
Science & Technology
Sunday, January 11, 2015

The Public Domain
I got curious about the Public Domain recently and was surprised by what I found. On the first day of each year, Public Domain Day celebrates the moment when copyrights expire and works enter the Public Domain, joining their brethren: the plays of Shakespeare, the music of Mozart, and the books of Dickens. Once in the Public Domain, a work can be freely copied, remixed, translated into other languages, and adapted into stage plays, movies, or other media, free from restrictions. Because they are free to use, such works can live on in perpetuity.

Of course, rights are based on jurisdiction, so not all countries will benefit equally every year. In 2015, our neighbors up north in Canada celebrated the entrance of the writings of Rachel Carson, Ian Fleming, and Flannery O'Connor into the Public Domain (along with hundreds of others). I'd be curious how a James Bond movie made in Canada would fare here in the U.S., as they now have the right to make such a movie. Speaking of the U.S., how many works do you think entered our Public Domain this year?

Not a single published work will enter the Public Domain this year. Next year? Nope! In fact, no published work will enter the Public Domain until 2019. This assumes that Congress does not, once again, extend the copyright term even longer than it is now (currently the author's lifetime plus 70 years) - which is how we ended up in this situation in the first place.

I've harped on this sort of thing before, so I won't belabor the point. I was just surprised that the Public Domain was so dead in the United States. Even works that gained notoriety for being accidentally let into the public domain, like It's a Wonderful Life, are being clamped down on. Ironically, It's a Wonderful Life only became famous once it was in the Public Domain and thus free to televise (frequent airings led to popularity). In the 1990s, the original copyright holder seized on some obscure court precedents and reasserted their rights based on the original musical score and the short story on which the film was based. The details of this are unclear, but the result is clear as crystal: it's not aired on TV very often anymore because NBC says they have exclusive rights (and they only air it a couple times a year) and derivative works, like a planned sequel, are continually blocked.

I don't know of a solution, but I did want to reflect on what the year could have brought us. There go my plans for a Vertigo remake!
Posted by Mark on January 11, 2015 at 01:52 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, September 15, 2013

The Myth of Digital Distribution
The movie lover's dream service would be something we could subscribe to that would give us a comprehensive selection of movies to stream. This service is easy to conceive, and it's such an alluring idea that it makes people want to eschew tried-and-true distribution methods like DVDs and Blu-Ray. We've all heard the arguments before: physical media is dead, streaming is the future. When I made the move to Blu-Ray about 6 years ago, I estimated that it would take at least 10 years for a comprehensive streaming service to become feasible. The more I see, the more I think that I drastically underestimated that timeline... and am beginning to feel like it might never happen at all.

MGK illustrates the problem well with this example:
this is the point where someone says "but we're all going digital instead" and I get irritated by this because digital is hardly an answer. First off, renting films - and when you "buy" digital movies, that's what you're doing almost every single time - is not the same as buying them. Second, digital delivery is getting more and more sporadic as rights get more and more expensive for distributors to purchase.

As an example, take Wimbledon, a charming little 2004 sports film/romcom starring Paul Bettany and Kirsten Dunst. I am not saying Wimbledon is an unsung treasure or anything; it's a lesser offering from the Working Title factory that cranks out chipper British romcoms, a solid B-grade movie: well-written with a few flashes of inspiration, good performances all around (including a younger Nikolaj Coster-Waldau before he became the Kingslayer) and mostly funny, although Jon Favreau's character is just annoying. But it's fun, and it's less than a decade old. It should be relatively easy to catch digitally, right? But no. It's not anywhere. And there are tons of Wimbledons out there.
Situations like this are an all too common occurrence, and not just with movies. It turns out that content owners can't be bothered with a title unless it's either new or in the public domain. This graph from a Rebecca Rosen article nicely illustrates the black hole that our extended copyright regime creates:
Books available by decade
Rosen explains:
[The graph] reveals, shockingly, that there are substantially more new editions available of books from the 1910s than from the 2000s. Editions of books that fall under copyright are available in about the same quantities as those from the first half of the 19th century. Publishers are simply not publishing copyrighted titles unless they are very recent.

The books that are the worst affected by this are those from pretty recent decades, such as the 80s and 90s, for which there is presumably the largest gap between what would satisfy some abstract notion of people's interest and what is actually available.
More interpretation:
This is not a gently sloping downward curve! Publishers seem unwilling to sell their books on Amazon for more than a few years after their initial publication. The data suggest that publishing business models make books disappear fairly shortly after their publication and long before they are scheduled to fall into the public domain. Copyright law then deters their reappearance as long as they are owned. On the left side of the graph before 1920, the decline presents a more gentle time-sensitive downward sloping curve.
This is absolutely absurd, though a couple of caveats apply: the graph doesn't control for used books (which are generally pretty easy to find on Amazon), and the problem won't hit every era equally. Content owners don't seem to be rushing to digitize their back catalogs, so I suspect future generations will still have trouble with 80s and 90s content, but anything published today gets put on digital/streaming services, so stuff from 2010 onward should theoretically be available on an indefinite basis.

Of course, intellectual property law being what it is, I'm sure that new proprietary formats and readers will render old digital copies obsolete, and once again, consumers will be hard pressed to see that 15 year old movie or book ported to the latest-and-greatest channel. It's a weird and ironic state of affairs when the content owners are so greedy in hoarding and protecting their works, yet so unwilling to actually, you know, profit from them.

I don't know what the solution is here. There have been some interesting ideas about having copyright expire for books that have been out of print for a certain period of time (say, 5-10 years), but that would only work now - again, future generations will theoretically have those digital versions available. They may be in a near-obsolete format, but they're available! Sensible copyright reform doesn't seem likely to pass, and while it would be nice to take a page from the open source playbook, I seriously doubt that content owners would ever be that forward thinking.

As MGK noted, DVD ushered in an era of amazing availability, but much of that stuff has gone out of print, and we somehow appear to be regressing from that.
Posted by Mark on September 15, 2013 at 06:03 PM .: link :.


Wednesday, May 08, 2013

Kindle Updates
I have, for the most part, been very pleased with using my Kindle Touch to read over the past couple years. However, while it got the job done, I felt like there were a lot of missed opportunities, especially when it came to metadata and personal metrics. Well, Amazon just released a new update to their Kindle software, and mixed in with the usual (i.e. boring) updates to features I don't use (like Whispersync or Parental Controls), there was this little gem:
The Time To Read feature uses your reading speed to let you know how much time is left before you finish your chapter or before you finish your book. Your specific reading speed is stored only on your Kindle Touch; it is not stored on Amazon servers.
Hot damn, that's exactly what I was asking for! Of course, it's all locked down and you can't really see what your reading speed is (or plot it over time, or by book, etc...), but this is the single most useful update to a device like this that I think I've ever encountered. Indeed, the fact that it tells you how much time until you finish both your chapter and the entire book is extremely useful, and it addresses my initial curmudgeonly complaints about the Kindle's hatred of page numbers and love of percentage.
Time to Read in Action
Will finish this book in about 4 hours!
Measuring book length by time mitigates the issues surrounding page counts by giving you a personalized measurement that is relevant and intuitive. No more futzing with the wild variability in page numbers or Amazon's bizarre Location system; you can just peek at the remaining time, and it's all good.

And I love that they give a time to read for both the current chapter and the entire book. One of the frustrating things about reading an ebook is that you never really know how long it will take to read a chapter. With a physical book, you can easily flip ahead and see where the chapter ends. Now, ebooks have that personalized time, which is perfect.
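Out of curiosity, the arithmetic behind a feature like this is simple enough to sketch. Amazon hasn't published how Time to Read actually works, so everything below (the tracked words-per-minute figure, the word counts) is a hypothetical illustration:

```python
def time_to_read(words_remaining, words_per_minute):
    """Minutes left at the tracked reading speed. Purely illustrative;
    Amazon has not documented how Time to Read really computes this."""
    if words_per_minute <= 0:
        raise ValueError("reading speed must be positive")
    return words_remaining / words_per_minute

def format_minutes(minutes):
    """Render minutes the way the Kindle might: '2h 40m'."""
    hours, mins = divmod(round(minutes), 60)
    return f"{hours}h {mins}m" if hours else f"{mins}m"

# A 100,000-word novel, 60% read, at a typical 250 words per minute:
minutes_left = time_to_read(100_000 * 0.4, 250)
print(format_minutes(minutes_left))  # 2h 40m
```

Presumably the device just keeps a rolling average of your speed and multiplies; the clever part is storing that speed locally per reader instead of guessing from a global average.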

I haven't spent a lot of time with this new feature, but so far, I love it. I haven't done any formal tracking, but it seems accurate, too (it seems like I'm reading faster than it says, but it's close). It even seems to recognize when you've taken a break (though I'm not exactly sure of that). Of course, I would love it if Amazon would allow us access to the actual reading speed data in some way. I mean, I can appreciate their commitment to privacy, and I don't think that needs to change either; I'd just like to be able to see some reports on my actual reading speed. Plot it over time, see how different books impact speed, and so on. Maybe I'm just a data visualization nerd, but think of the graphs! I love this update, but they're still only scratching the surface here. There's a lot more there for the taking. Let's hope we're on our way...
Posted by Mark on May 08, 2013 at 08:42 PM .: link :.


Wednesday, April 24, 2013

The State of Streaming
So Netflix has had a good first quarter, exceeding expectations and crossing the $1 Billion revenue threshold. Stock prices have been skyrocketing, going from sub 100 to over 200 in just the past 4-5 months. Their subscriber base continues to grow, and fears that people would use the free trial to stream exclusive content like House of Cards, then bolt from the service seem unfounded. However, we're starting to see a fundamental shift in the way Netflix is doing business here. For the first time ever, I'm seeing statements like this:
As we continue to focus on exclusive and curated content, our willingness to pay for non-exclusive, bulk content deals declines.
I don't like the sound of that, but then, the cost of non-exclusive content seems to keep rising at an absurd level, and well, you know, it's not exclusive. The costs have risen to somewhere on the order of $2 billion per year on content licensing and original shows. So statements like this seem like a natural outgrowth of that cost:
As we've gained experience, we've realized that the 20th documentary about the financial crisis will mostly just take away viewing from the other 19 such docs, and instead of trying to have everything, we should strive to have the best in each category. As such, we are actively curating our service rather than carrying as many titles as we can.
We don't and can't compete on breadth with Comcast, Sky, Amazon, Apple, Microsoft, Sony, or Google. For us to be hugely successful we have to be a focused passion brand. Starbucks, not 7-Eleven. Southwest, not United. HBO, not Dish.
This all makes perfect sense from a business perspective, but as a consumer, this sucks. I don't want to have to subscribe to 8 different services to watch 8 different shows that seem interesting to me. Netflix's statements and priorities seem to be moving, for the first time, away from a goal of providing a streaming service with a wide, almost comprehensive selection of movies and television. Instead, we're getting a more curated approach coupled with original content. That wouldn't be the worst thing ever, but Netflix isn't the only one playing this game. Amazon just released 14 pilot episodes for their own exclusive content. I'm guessing it's only a matter of time before Hulu joins this roundelay (and for all I know, they're already there - I've just hated every experience I've had with Hulu so much that I don't really care to look into it). HBO is already doing its thing with HBO Go, which exclusively streams their shows. How many other streaming services will I have to subscribe to if I want to watch TV (or movies) in the future? Like it or not, fragmentation is coming. And no one seems to be working on a comprehensive solution anymore (at least, not in a monthly subscription model - Amazon and iTunes have pretty good a la carte options). This is frustrating, and I feel like there's a big market for this thing, but at the same time, content owners seem to be overcharging for their content. If Netflix's crappy selection costs $2 billion a year, imagine what something even remotely comprehensive would cost (easily 5-10 times that amount, which is clearly not feasible).

Incidentally, Netflix's third exclusive series, Hemlock Grove, premiered this past weekend. I tried to watch the first episode, but I fell asleep. What I remember was pretty schlocky and not particularly inspiring... but I have a soft spot for cheesy stuff like this, so I'll give it another chance. Still, the response seems a bit mixed on this one. I did really end up enjoying House of Cards, but I'm not sure how much I'm going to stick with Hemlock Grove...
Posted by Mark on April 24, 2013 at 09:28 PM .: link :.


Sunday, January 06, 2013

What's in a Book Length?
I mentioned recently that book length is something that's been bugging me. It seems that we have a somewhat elastic relationship with length when it comes to books. The traditional indicator of book length is, of course, page number... but due to variability in font size, type, spacing, format, media, and margins, the hallowed page number may not be as concrete as we'd like. Ebooks theoretically provide an easier way to maintain a consistent measurement across different books, but it doesn't look like anyone's delivered on that promise. So how are we to know the lengths of our books? Fair warning, this post is about to get pretty darn nerdy, so read on at your own peril.

In terms of page numbers, books can vary wildly. Two books with the same number of pages might be very different in terms of actual length. Let's take two examples: Gravity's Rainbow (784 pages) and Harry Potter and the Goblet of Fire (752 pages). Looking at page number alone, you'd say that Gravity's Rainbow is only slightly longer than Goblet of Fire. With the help of the magical internets, let's take a closer look at the print inside the books (click image for a bigger version):
Pages from Gravitys Rainbow and Harry Potter and the Goblet of Fire
As you can see, there is much more text on the page in Gravity's Rainbow. Harry Potter has a smaller canvas to start with (at least, in terms of height), but larger margins, more line spacing, and I think even a slightly larger font. I don't believe it would be an exaggeration to say that when you take all this into account, the Harry Potter book is probably less than half the length of Gravity's Rainbow. I'd estimate it somewhere on the order of 300-350 pages. And that's even before we get into things like vocabulary and paragraph breaks (which I assume would also serve to inflate Harry Potter's length). Now, this is an extreme example, but it illustrates the variability of page numbers.
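To put rough numbers on that intuition: multiply lines per page by words per line to get each book's density, and the comparison falls out. The layout figures below are my own illustrative guesses, not measurements of the actual editions:

```python
def words_per_page(lines, words_per_line):
    """Rough text density of a page, from layout alone."""
    return lines * words_per_line

# Illustrative layout guesses (not measured from the real editions):
gr_wpp = words_per_page(lines=45, words_per_line=13)  # dense Gravity's Rainbow page
hp_wpp = words_per_page(lines=30, words_per_line=8)   # airier Goblet of Fire page

hp_words = 752 * hp_wpp               # rough total word count for Goblet of Fire
equivalent_pages = hp_words / gr_wpp  # Goblet of Fire re-set at GR's density

print(round(equivalent_pages))  # 309 -- right in the 300-350 range estimated above
```

The exact numbers are guesses, but the point survives any reasonable layout assumptions: page count without density tells you very little about length.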

Ebooks present a potential solution. Because Ebooks have different sized screens and even allow the reader to choose font sizes and other display options, page numbers start to seem irrelevant. So Ebook makers devised what are called reflowable documents, which adapt their presentation to the output device. For example, Amazon's Kindle uses an Ebook format that is reflowable. It does not (usually) feature page numbers, instead relying on a percentage indicator and the mysterious "Location" number.

The Location number is meant to be consistent, no matter what formatting options you're using on your ereader of choice. Sounds great, right? Well, the problem is that the Location number is pretty much just as arbitrary as page numbers. It is, of course, more granular than a page number, so you can easily skip to the exact location on multiple devices, but as for what actually constitutes a single "Location Number", that is a little more tricky.

In looking around the internets, it seems there is distressingly little information about what constitutes an actual Location. According to this thread on Amazon, someone claims that: "Each location is 128 bytes of data, including formatting and metadata." This rings true to me, but unfortunately, it also means that the Location number is pretty much meaningless.

The elastic relationship we have with book length is something I've always found interesting, but what made me want to write this post was when I wanted to pick a short book to read in early December. I was trying to make my 50 book reading goal, so I wanted something short. In looking through my book queue, I saw Alfred Bester's classic SF novel The Stars My Destination. It's one of those books I consistently see at the top of best SF lists, so it's always been on my radar, and looking at Amazon, I saw that it was only 236 pages long. Score! So I bought the ebook version and fired up my Kindle only to find that in terms of locations, it's the longest book I have on my Kindle (as of right now, I have 48 books on there). This is when I started looking around at Locations and trying to figure out what they meant. As it turns out, while the Location numbers provide a consistent reference within the book, they're not at all consistent across books.

I did a quick spot check of 6 books on my Kindle, looking at total Location numbers, total page numbers (resorting to print version when not estimated by Amazon), and file size of the ebook (in KB). I also added a column for Locations per page number and Locations per KB. This is an admittedly small sample, but what I found is that there is little consistency among any of the numbers. The notion of each Location being 128 bytes of data seems useful at first, especially when you consider that the KB information is readily available, but because that includes formatting and metadata, it's essentially meaningless. And the KB number also includes any media embedded in the book (i.e. illustrations crank up the KB, which distorts any calculations you might want to do with that data).
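If the 128-bytes-per-Location claim is right, you can predict a book's total Locations straight from its file size, and the prediction shows exactly why the number fails as a length measure. The file sizes below are hypothetical:

```python
def predicted_locations(file_kb):
    """If each Location really is 128 bytes, file size predicts the
    Location count -- but file size includes formatting, metadata, and
    any embedded images, so the prediction says nothing about text length."""
    return round(file_kb * 1024 / 128)

# Hypothetical spot check: a plain-text novel vs. an illustrated edition.
print(predicted_locations(500))    # 4000 -- plausible for a text-only book
print(predicted_locations(2500))   # 20000 -- mostly images, not five times the text
```

Two books of identical text length can differ by thousands of Locations just because one has a few illustrations, which is exactly the distortion described above.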

It turns out that The Stars My Destination will probably end up being relatively short, as the page numbers would imply. There's a fair amount of formatting within the book (which, by the way, doesn't look so hot on the Kindle), and doing spot checks of how many Locations I pass when cycling to the next screen, it appears that this particular ebook is going at a rate of about 12 Locations per cycle, while my previous book was going at a rate of around 5 or 6 per cycle. In other words, while the total Locations for The Stars My Destination were nearly twice what they were for my previously read book, I'm also cycling through Locations at double the rate. Meaning that, basically, this is the same length as my previous book.

Various attempts have been made to convert Location numbers to page numbers, with limited success. This is due to the generally elastic nature of a page, combined with the inconsistent size of Locations. For most books, it seems like dividing the Location numbers by anywhere from 12-16 (the linked post posits dividing by 16.69, but the books I checked mostly ranged from 12-16) will get you a somewhat accurate page number count that is marginally consistent with print editions. Of course, for The Stars My Destination, that won't work at all. For that book, I have to divide by 40.86 to get close to the page number.
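In code, the conversion itself is trivial; the hard part is that the divisor is a per-book unknown. A sketch, using divisors from the ranges above:

```python
def estimate_page(location, divisor):
    """Locations-to-pages guesswork: the divisor ranged from roughly
    12 to 16 for most books checked, but outliers exist (The Stars My
    Destination needs about 40.86)."""
    return round(location / divisor)

# The same Location number lands on wildly different "pages" depending
# on the book's divisor:
print(estimate_page(3000, divisor=14))      # 214
print(estimate_page(3000, divisor=40.86))   # 73
```

Unless you already know the print page count (which is what you were trying to derive in the first place), there's no way to calibrate the divisor, which is the whole problem in a nutshell.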

Why is this important at all? Well, there's clearly an issue with ebooks in academia, because citations are so important for that sort of work. Citing a location won't get readers of a paper anywhere close to a page number in a print edition (whereas, even using differing editions, you can usually track down the quote relatively easily if a page number is referenced). On a personal level, I enjoy reading ebooks, but one of the things I miss is the easy and instinctual notion of figuring out how long a book will take to read just by looking at it. Last year, I was shooting for reading quantity, so I wanted to tackle shorter books (this year, I'm trying not to pay attention to length as much and will be tackling a bunch of large, forbidding tomes, but that's a topic for another post)... but there really wasn't an easily accessible way to gauge the length. As we've discovered, both page numbers and Location numbers are inconsistent. In general, the larger the number, the longer the book, but as we've seen, that can be misleading in certain edge cases.

So what is the solution here? Well, we've managed to work with variable page numbers for thousands of years, so maybe no solution is really needed. A lot of newer ebooks even contain page numbers (despite the variation in display), so if we can find a way to make that more consistent, that might help make things a little better. But the ultimate solution would be to use something like Word Count. That's a number that might not be useful in the midst of reading a book, but if you're really looking to determine the actual length of the book, Word Count appears to be the best available measurement. It would also be quite easily calculated for ebooks. Is it perfect? Probably not, but it's better than page numbers or location numbers.

In the end, I enjoy using my Kindle to read books, but I wish they'd get on the ball with this sort of stuff. If you're still reading this (Kudos to you) and want to read some more babbling about ebooks and where I think they should be going, check out my initial thoughts and my ideas for additional metadata and the gamification of reading. The notion of ereaders really does open up a whole new world of possibilities... it's a shame that Amazon and other ereader companies keep their platforms so locked down and uninteresting. Of course, reading is its own reward, but I really feel like there's a lot more we can be doing with our ereader software and hardware.
Posted by Mark on January 06, 2013 at 08:02 PM .: link :.


Wednesday, August 08, 2012

Web browsers I have known, 1996-2012
Jason Kottke recently recapped all of the browsers he used as his default for the past 18 years. It sounded like fun, so I'm going to shamelessly steal the idea and list out my default browsers for the past 16 years (prior to 1996, I was stuck in the dark ages of dialup AOL - but once I went away to college and discovered the joys of T1/T3 connections, my browsing career started in earnest, so that's when I'm starting this list).
  • 1996 - Netscape Navigator 3 - This was pretty much the uncontested king of browsers at the time, but its reign would be short. I had a copy of IE3 (I think?) on my computer too, but I almost never used it...
  • 1997-1998 - Netscape Communicator 4 - Basically Netscape Navigator 4, but the Communicator was a whole suite of applications which appealed to me at the time. I used it for email and even to start playing with some HTML editing (though I would eventually abandon everything but the browser from this suite). IE4 did come out sometime in this timeframe and I used it occasionally, but I think I stuck with NN4 way longer than I probably should have.
  • 1999-2000 - Internet Explorer 5 - With the release of IE5 and the increasing issues surrounding NN4, I finally jumped ship to Microsoft. I was never particularly comfortable with IE though, and so I was constantly looking for alternatives and trying new things. I believe early builds of Mozilla were available, and I kept downloading the updates in the hopes that it would allow me to dispense with IE, but it was still early in the process for Mozilla. This was also my first exposure to Opera, which at the time wasn't that remarkable (we're talking version 3.5 - 4 here) except that, as usual, they were ahead of the curve on tabbed browsing (a mixed blessing, as monitor resolutions at the time weren't great). Opera was also something you had to pay for at the time, and a lot of sites didn't work in Opera. This would all change at the end of 2000, though, with the release of Opera 5.
  • 2001 - Opera 5 - This browser changed everything for me. It was the first "free" Opera browser available, although the free version was ad-supported (quite annoying, but it was easy enough to get rid of the ads). The thing that was revolutionary about this browser, though, was mouse gestures. It was such a useful feature, and Opera's implementation was (and quite frankly, still is) the best, smoothest implementation of the functionality I've seen. At this point, I was working at a website, so for work, I was still using IE5 and IE6 as my primary browser (because at the time, they represented something like 85-90% of the traffic to our site). I was also still experimenting with the various Mozilla-based browsers at the time as well, but Opera was my default for personal browsing. Of course, no one codes for Opera, so there were plenty of sites that I'd have to fire up IE for (this has always been an issue with Opera).
  • 2002-2006 - Opera 6/7/8/9 - I pretty much kept rolling with Opera during this timeframe. Again, for my professional use, IE6/IE7 was still a must, but in 2004, Firefox 1.0 launched, so that added another variable to the mix. I wasn't completely won over by the initial Firefox offerings, but it was the first new browser in a long time that I thought had a bright future. It also provided a credible alternative for when Opera crapped out on a weirdly coded page. However, as web standards started to actually be implemented, Opera's issues became fewer as time went on...
  • 2007 - Firefox 2/Opera 9 - It was around this time that Firefox started to really assert itself in my personal and professional usage. I still used Opera a lot for personal usage, but for professional purposes, Firefox was a simple must. At the time, I was embroiled in a year-long site redesign project for my company, and I was doing a ton of HTML/CSS/JavaScript development... Firefox was an indispensable tool at the time, mostly due to extensions like Firebug and the Web-Developer Toolbar. I suppose I should note that Safari first came to my attention at this point, mostly for troubleshooting purposes. I freakin' hate that browser.
  • 2008-2011 - Firefox/Opera - After 2007, there was a slow, inexorable drive towards Firefox. Opera kept things interesting with a feature they call Speed Dial (and quite frankly, I like that feature much better than what Chrome and recent versions of Firefox have implemented), but the robust and mature list of extensions for Firefox was really difficult to compete with, especially when I was trying to get stuff done. Chrome also started to gain popularity in this timeframe, but while I loved how well it loaded Ajax and other JavaScript-heavy features, I could never really get comfortable with the interface. Firefox still afforded more control, and Opera's experience was generally better.
  • 2012/Present - Firefox - Well, I think it's pretty telling that I'm composing this post on Firefox. That being said, I still use Opera for simple browsing purposes semi-frequently. Indeed, I usually have both browsers open at all times on my personal computer. At work, I'm primarily using Firefox, but I'm still forced to use IE8, as our customers tend to still prefer IE (though the percentage is much less these days). I still avoid Safari like the plague (though I do sometimes need to troubleshoot and I suppose I do use Mobile Safari on my phone). I think I do need to give Chrome a closer look, as it's definitely more attractive these days...
Well, there you have it. I do wonder if I'll ever get over my stubborn love for Opera, a browser that almost no one but me uses. They really do manage to keep up with the times, and have even somewhat recently allowed Firefox and Chrome style extensions, though I think it's a little too late for them. FF and Chrome just have a more robust community surrounding their development than Opera. I feel like it's a browser fated to die at some point, but I'll probably continue to use it until it does... So what browser do you use?
Posted by Mark on August 08, 2012 at 09:23 PM .: link :.


Wednesday, April 11, 2012

More Disgruntled, Freakish Reflections on ebooks and Readers
While I have some pet peeves with the Kindle, I've mostly found it to be a good experience. That being said, there are some things I'd love to see in the future. These aren't really complaints, as some of this stuff isn't yet available, but there are a few opportunities afforded by the electronic nature of eBooks that would make the whole process better.
  • The Display - The electronic ink display that the basic Kindles use is fantastic... for reading text. Once you get beyond simple text, things are a little less fantastic. Things like diagrams, artwork, and photography aren't well represented in e-ink, and even in color readers (like the iPad or Kindle Fire), there are issues with resolution and formatting that often show up in eBooks. Much of this comes down to technology and cost, both of which are improving quickly. Once stuff like IMOD displays start to deliver on their promise (low power consumption, full color, readable in sunlight, easy on the eyes, capable of supporting video, etc...), we should see a new breed of reader.

    I'm not entirely sure how well this type of display will work, at least initially. For instance, how will it compare to the iPad 3's display? What's the resolution like? How much will it cost? And so on. Current implementations aren't full color, and I suspect that future iterations will go through a phase where the tech isn't quite there yet... but I think it will be good enough to move forward. I think Amazon will most certainly jump on this technology when it becomes feasible (both from a technical and cost perspective). I'm not sure if Apple would switch though. I feel like they'd want a much more robust and established display before they committed.
  • General Metrics and Metadata - While everyone would appreciate improvements in device displays, I'm not sure how important this would be. Maybe it's just me, but I'd love to see a lot more in the way of metadata and flexibility, both about the book and about device usage. With respect to the book itself, this gets to the whole page number issue I was whinging about in my previous post, but it's more than that. I'd love to see a statistical analysis of what I'm reading, on both individual and collective levels.

    I'm not entirely sure what this looks like, but it doesn't need to be rocket science. Simple Flesch-Kincaid grades seem like an easy enough place to start, and they would be pretty simple to implement. Calculating such things for my entire library (or a subset of my library), or ranking my library by grade (or similar sorting methods) would be interesting. I don't know that this would provide a huge amount of value, but I would personally find it very illuminating and fun to play around with. Individual works wouldn't even require any processing power on the reader; the score could simply be part of the download. Doing calculations across your collective library might be a little more complicated, but even that could probably be done in the cloud.

    Other metadata would also be interesting to view. For example, Goodreads will graph your recently read books by year of publication - a lot of analysis could be done about your collection (or a sub-grouping of your collection) of books along those lines. Groupings by decade or genre or reading level, all would be very interesting to know.
  • Personal Metrics and Metadata - Basically, I'd like to have a way to track my reading speed. For whatever reason, this is something I'm always trying to figure out for myself. I've never gone through the process of actually recording my reading habits and speeds because it would be tedious and manual and maybe not even all that accurate. But now that I'm reading books in an electronic format, there's no reason why the reader couldn't keep track of what I'm reading, when I'm reading, and how fast I'm reading. My anecdotal experience suggests that I read anywhere from 20-50 pages an hour, depending mostly on the book. As mentioned in the previous post, a lot of this has to do with the arbitrary nature of page numbers, so perhaps standardizing to a better metric (words per minute or something like that) would normalize my reading speed.

    Knowing my reading speed and graphing changes over time could be illuminating. I've played around a bit with speed reading software, and the results are interesting, but not drastic. In any case, one thing that would be really interesting to know when reading a book would be how much time you have left before you finish. Instead of having 200 pages, maybe you have 8 hours of reading time left.

    Combining my personal data with the general data could also yield some interesting results. Maybe I read trashy SF written before 1970 much faster than more contemporary literary fiction. Maybe I read long books faster than short books. There are a lot of possibilities here.

    There are a few catches to this whole personal metrics thing though. You'd need a way to account for breaks and interruptions. I might spend three hours reading tonight, but I'm sure I'll take a break to get a glass of water or answer a phone call, etc... There's not really an easy way around this, though there could be mitigating factors like when the reader goes to sleep mode or something like that. Another problem is that one device can be used by multiple people, which would require some sort of profile system. That might be fine, but it also adds a layer of complexity to the interface that I'm sure most companies would like to avoid. The biggest and most concerning potential issue is that of privacy. I'd love to see this information about myself, but would I want Amazon to have access to it? On the other hand, being able to aggregate data from all Kindles might prove interesting in its own right. Things like average reading speed, number of books read in a year, and so on. All interesting and useful info.

    This would require an openness and flexibility that Amazon has not yet demonstrated. It's encouraging that the Kindle Fire runs a flavor of Android (an open source OS), but on the other hand, it's a forked version that I'm sure isn't as free (as in speech) as I'd like (and from what I know, the Fire is partially limited by its hardware). Expecting comprehensive privacy controls from Amazon seems naive.

    I'd like to think that these metrics would be desirable to a large audience of readers, but I really have no idea what the mass market appeal would be. It's something I'd actually like to see in a lot of other places too. Video games, for instance, provide a lot of opportunity for statistics, and some games provide a huge amount of data on your gaming habits (be it online or in a single player mode). Heck, half the fun of sports games (or sports in general) is tracking the progress of your players (particularly prospects). Other games are baffling in their lack of statistical depth. People should be playing meta-games like Fantasy Baseball, but with MLB The Show providing the data instead of real life.
  • The Gamification of Reading - Much of the above wanking about metrics could probably be summarized as a way to make reading a game. The metrics mentioned above readily lend themselves to point scores, social-app-like badges, and leaderboards. I don't know that this would necessarily be a good thing, but it could make for an intriguing system. There's an interesting psychology at work in systems like this, and I'd be curious to see if someone like Amazon could make reading more addictive. Assuming most people don't try to abuse the system (though there will always be a cohort that will attempt to exploit stuff like this), it could ultimately lead to beneficial effects for individuals who "play" the game competitively with their friends. Again, this isn't necessarily a good thing. Perhaps the gamification of reading will lead to a sacrifice of comprehension in the name of speed, or other unintended side effects. Still, it would be nice to see the "gamification of everything" used for something other than a way for companies to trick customers into buying their products.
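The metrics ideas above are simple enough to sketch in code. Here's a rough, hypothetical illustration (not any actual Kindle feature) of how a reader might compute a Flesch-Kincaid grade and an estimated time remaining. Note that the syllable counter is a crude vowel-group heuristic; a real implementation would use a pronunciation dictionary.

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels. Real
    # implementations use pronunciation dictionaries for accuracy.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

def time_remaining(words_left, words_per_minute):
    # Estimated reading time left, in minutes, based on measured speed.
    return words_left / words_per_minute

sample = "The quick brown fox jumps over the lazy dog. It was not amused."
print(round(flesch_kincaid_grade(sample), 1))
print(round(time_remaining(60000, 250)))  # 60k words at 250 wpm -> 240 minutes
```

The grade calculation could happen server-side at download time, while the words-per-minute figure would come from the device's own usage tracking.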
As previously mentioned, the need for improved displays is a given (and not just for ereaders). But assuming these nutty metrics (and the gamification of reading) are an appealing concept, I'd like to think that it would provide an opening for someone to challenge Amazon in the market. An open, flexible device using a non-DRMed format and tied to a common store would be very nice. Throw in some game elements, add a great display, and you've got something close to my ideal reader. Unfortunately, it doesn't seem like we're all that close just yet. Maybe in 5-10 years? Seems possible, but it's probably more likely that Amazon will continue its dominance.
Posted by Mark on April 11, 2012 at 09:22 PM .: link :.

End of This Day's Posts

Wednesday, February 15, 2012

Last week, I looked at commonplace books and various implementation solutions. Ideally, I wanted something open and flexible that would also provide some degree of analysis in addition to the simple data aggregation most tools provide. I wanted something that would take into account a wide variety of sources in addition to my own writing (on this blog, for instance). Most tools provide a search capability of some kind, but I was hoping for something more advanced. Something that would make connections between data, or find similarities with something I'm currently writing.

At a first glance, Zemanta seemed like a promising candidate. It's a "content suggestion engine" specifically built for blogging and it comes pre-installed on a lot of blogging software (including Movable Type). I just had to activate it, which was pretty simple. Theoretically, it continually scans a post in progress (like this one) and provides content recommendations, ranging from simple text links defining key concepts (i.e. links to Wikipedia, IMDB, Amazon, etc...), to imagery (much of which seems to be integrated with Flickr and Wikipedia), to recommended blog posts from other folks' blogs. One of the things I thought was really neat was that I could input my own blogs, which would then give me more personalized recommendations.

Unfortunately, results so far have been mixed. There are some things I really like about Zemanta, but it's pretty clearly not the solution I'm looking for. Some assorted thoughts:

  • Zemanta will only work when using the WYSIWYG Rich Text editor, which turns out to be a huge pain in the arse.  I'm sure lots of people are probably fine with that, but I've been editing my blog posts in straight HTML for far too long. I suppose this is more of a hangup on my end than a problem with Zemanta, but it's definitely something I find annoying.  When I write a post in WYSIWYG format, I invariably switch it back to no formatting and jump through a bunch of hoops getting the post to look like what I want.
  • The recommended posts haven't been very useful so far. Some of the external choices are interesting, but so far, nothing has really helped me in writing my posts. I was really hoping that loading my blog into Zemanta would add a lot of value, but it turns out that Zemanta only scanned my recent posts, and it sorta recommended most of them, which doesn't help much. I know what I've written recently; what I was hoping for was that Zemanta would be able to point out some post I wrote in 2005 along similar lines. (In my previous post on Taxonomy Platforms, I specifically referenced the titles of some of my old blog posts, but since they were old, Zemanta didn't find or recommend them. Even more annoying, when writing this post, the Taxonomy Platforms post wasn't one of the recommended articles despite my specifically mentioning it. Update: It has it now, but it didn't seem to appear until after I'd already gone through the trouble of linking it...) It appears that Zemanta is basing all of this on my RSS feed, which makes sense, but I wish there was a way to upload my full archives, as that might make this tool a little more powerful...
  • The recommendations seem to be based on a relatively simplistic algorithm. A good search engine will index data and learn associations between individual words by tracking their frequency and how close they are to other words.  Zemanta doesn't seem to do that.  In my previous post, I referenced famous beer author Michael Jackson. What did Zemanta recommend?  Lots of pictures and articles about the musician, nothing about the beer journalist. I don't know if I'm expecting too much out of the system, but it would be nice if the software would pick up on the fact that this guy's name was showing up near lots of beer talk, with nary a reference to music. It's probably too much to hope that my specifically calling out that I was talking about "the beer critic, not the pop star" would influence the system (and indeed, my reference to "pop star" may have influenced the recommendations, despite the fact that I was trying to negate that).
  • The "In-Text Links", on the other hand, seem to come in quite handy. I actually leveraged several of them in my past few posts, and they were very easy to use. Indeed, I particularly appreciated their integration with Amazon, where I could enter my associates ID, and the links that were inserted were automatically generated with my ID. This is normally a pretty intensive process involving multiple steps that has been simplified down to the press of a button.  Very well done, and most of the suggestions there were very relevant.
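For what it's worth, the Michael Jackson mix-up above doesn't require anything fancy to avoid. Zemanta's actual algorithm is unknown to me, but here's a toy sketch of the co-occurrence idea: score each candidate meaning of an ambiguous name by how much of its signature vocabulary appears nearby.

```python
# Hypothetical signature vocabularies for each sense of the name; a real
# system would learn these from a corpus rather than hard-code them.
CANDIDATES = {
    "Michael Jackson (musician)": {"music", "pop", "album", "thriller", "singer"},
    "Michael Jackson (beer writer)": {"beer", "ale", "brewery", "stout", "critic"},
}

def disambiguate(text):
    words = set(text.lower().split())
    # Pick the candidate whose context vocabulary overlaps the text most.
    return max(CANDIDATES, key=lambda c: len(CANDIDATES[c] & words))

post = "michael jackson wrote about every stout and ale the brewery made"
print(disambiguate(post))  # -> "Michael Jackson (beer writer)"
```

Even this naive overlap count gets the beer writer right, which is why the musician-heavy recommendations are so disappointing.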

I will probably continue to play with Zemanta, but I suspect it will be something that doesn't last much longer. It provides some value, but it's ultimately not as convenient as I'd like, and its analysis and recommendation functions don't seem as useful as I'd like.

I've also been playing around with Evernote more and more, and I feel like that could be a useful tool, despite the fact that it doesn't really offer any sort of analysis (though it does have a simple search function). There's at least one third party, though, that seems to be positioning itself as an analysis tool that will integrate with Evernote.  That tool is called Topicmarks.  Unfortunately, I seem to be having some issues integrating my Evernote data with that service. At this rate, I don't know that I'll find a great tool for what I want, but it's an interesting subject, and I'm guessing it will be something that will become more and more important as time goes on. We're living in the Information Age; it seems only fair that our aggregation and analysis tools get more sophisticated.

Enhanced by Zemanta
Posted by Mark on February 15, 2012 at 06:08 PM .: link :.

End of This Day's Posts

Wednesday, February 08, 2012

During the Enlightenment, most intellectuals kept what's called a Commonplace Book. Basically, folks like John Locke or Mark Twain would curate transcriptions of interesting quotes from their readings. It was a personalized record of interesting ideas that the author encountered. When I first heard about the concept, I immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past decade or so on this blog. It's not very organized, though, and it's something that's been banging around in my head for the better part of the last year or so.

Locke was a big fan of Commonplace Books, and he spent years developing an intricate system for indexing his books' content. It was, of course, a ridiculous and painstaking process, but it worked. Fortunately for us, this is exactly the sort of thing that computer systems excel at, right? The reason I'm writing this post is a small confluence of events that has led me to consider creating a more formal Commonplace Book. Despite my earlier musing on the subject, this blog doesn't really count. It's not really organized correctly, and I don't publish all the interesting quotes that I find. Even if I did, it's not really in a format that would do me much good. So I'd need to devise another plan.

Why do I need a plan at all? What's the benefit of a commonplace book? Well, I've been reading Steven Johnson's book Where Good Ideas Come From: The Natural History of Innovation and he mentions how he uses a computerized version of the commonplace book:
For more than a decade now, I have been curating a private digital archive of quotes that I've found intriguing, my twenty-first century version of the commonplace book. ... I keep all these quotes in a database using a program called DEVONthink, where I also store my own writing: chapters, essays, blog posts, notes. By combining my own words with passages from other sources, the collection becomes something more than just a file storage system. It becomes a digital extension of my imperfect memory, an archive of all my old ideas, and the ideas that have influenced me.
This DEVONthink software certainly sounds useful. It's apparently got this fancy AI that will generate semantic connections between quotes and what you're writing. It's advanced enough that many of those connections seem to be subtle and "lyrical", finding connections you didn't know you were looking for. It sounds perfect except for the fact that it only runs on Mac OSX. Drats. It's worth keeping in mind in case I ever do make the transition from PC to Mac, but it seems like lunacy to do so just to use this application (which, for all I know, will be useless to me).

By sheer happenstance, I've also been playing around with Pinterest lately, and it occurs to me that it's a sort of commonplace book, albeit one with more of a narrow focus on images and video (and recipes?) than quotes. There are actually quite a few sites like that. I've been curating a large selection of links on Delicious for years now (1600+ links on my account). Steven Johnson himself has recently contributed to a new web startup called Findings, which is primarily concerned with book quotes. All of this seems rather limiting, and quite frankly, I don't want to be using 7 completely different tools to do the same thing, but for different types of media.

I also took a look at Tumblr again, this time evaluating it from a commonplacing perspective. There are some really nice things about the interface and the ease with which you can curate your collection of media. The problem, though, is that their archiving system is even more useless than most blog software. It's not quite the hell that is Twitter archives, but that's a pretty low bar. Also, as near as I can tell, the data is locked up on their server, which means that even if I could find some sort of indexing and analysis tool to run through my data, I won't really be able to do so (Update: apparently Tumblr does have a backup tool, but only for use with OSX. Again!? What is it with you people? This is the internet, right? How hard is it to make this stuff open?)

Evernote shows a lot of promise and probably warrants further examination. It seems to be the go-to alternative for lots of researchers and writers. It's got a nice cloud implementation with a robust desktop client and the ability to export data as I see fit. I'm not sure if its search will be as sophisticated as what I ultimately want, but it could be an interesting tool.

Ultimately, I'm not sure the tool I'm looking for exists. DEVONthink sounds pretty close, but it's hard to tell how it will work without actually using the damn thing. The ideal would be a system where you can easily maintain a whole slew of data and metadata, to the point where I could be writing something (say a blog post or a requirements document for my job) and the tool would suggest relevant quotes/posts based on what I'm writing. This would probably be difficult to accomplish in real-time, but a "Find related content" feature would still be pretty awesome. Anyone know of any alternatives?
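Incidentally, the core of a "Find related content" feature isn't magic. A bare-bones version can rank archived notes against a draft using TF-IDF weights and cosine similarity. This is just an illustrative sketch with made-up sample notes; DEVONthink's actual approach is presumably far more sophisticated.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Build simple TF-IDF vectors for a small corpus of notes/quotes.
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(word for toks in tokenized for word in set(toks))
    idf = {w: math.log(n / df[w]) for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def find_related(query, archive):
    # Rank archived notes by similarity to the draft in progress.
    vecs = tfidf_vectors(archive + [query])
    qvec = vecs[-1]
    ranked = sorted(range(len(archive)),
                    key=lambda i: cosine(vecs[i], qvec), reverse=True)
    return [archive[i] for i in ranked]

notes = [
    "locke kept a commonplace book with an elaborate index",
    "hulu plus restricts which episodes stream on mobile devices",
    "devonthink finds semantic connections between stored quotes",
]
draft = "building a digital commonplace book of quotes and notes"
print(find_related(draft, notes)[0])
```

Run against the sample archive, the Locke note ranks first because it shares the most distinctive vocabulary with the draft. The "lyrical" connections Johnson describes would need something beyond word overlap, but this is the baseline.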

Update: Zemanta! I completely forgot about this. It comes installed by default with my blogging software, but I had turned it off a while ago because it took forever to load and was never really that useful. It's basically a content recommendation engine, pulling content from lots of internet sources (notably Wikipedia, Amazon, Flickr and IMDB). It's also grown considerably in the time since I'd last used it, and it now features a truckload of customization options, including the ability to separate general content recommendations from your own, personally curated sources. So far, I've only connected my two blogs to the software, but it would be interesting if I could integrate Zemanta with Evernote, Delicious, etc... I have no idea how great the recommendations will be (or how far back it will look on my blogs), but this could be exactly what I was looking for. Even if integration with other services isn't working, I could probably create myself another blog just for quotes, and then use that blog with Zemanta. I'll have to play around with this some more, but I'm intrigued by the possibilities.
Enhanced by Zemanta
Posted by Mark on February 08, 2012 at 05:31 PM .: link :.

End of This Day's Posts

Wednesday, January 18, 2012

SOPA Blues
I was going to write the annual arbitrary movie awards tonight, but since the web has apparently gone on strike, I figured I'd spend a little time talking about that instead. Many sites, including the likes of Wikipedia and Reddit, have instituted a complete blackout as part of a protest against two ill-conceived pieces of censorship legislation currently being considered by the U.S. Congress (these laws are called the Stop Online Piracy Act and Protect Intellectual Property Act, henceforth to be referred to as SOPA and PIPA). I can't even begin to pretend that blacking out my humble little site would accomplish anything, but since a lot of my personal and professional livelihood depends on the internet, I suppose I can't ignore this either.

For the uninitiated, if the bills known as SOPA and PIPA become law, many websites could be taken offline involuntarily, without warning, and without due process of law, based on little more than an alleged copyright owner's unproven and uncontested allegations of infringement1. The reason Wikipedia is blacked out today is that they depend solely on user-contributed content, which means they would be a ripe target for overzealous copyright holders. Sites like Google haven't blacked themselves out, but have staged a bit of a protest as well, because under the provisions of the bill, even just linking to a site that infringes upon copyright is grounds for action (and thus search engines have a vested interest in defeating these bills). You could argue that these bills are well intentioned, and from what I can tell, their original purpose seemed to be more about foreign websites and DNS, but the road to hell is paved with good intentions and as written, these bills are completely absurd.

Lots of other sites have been registering their feelings on the matter. ArsTechnica has been posting up a storm. Shamus has a good post on the subject which is followed by a lively comment thread. But I think Aziz hits the nail on the head:
Looks like the DNS provisions in SOPA are getting pulled, and the House is delaying action on the bill until February, so it’s gratifying to see that the activism had an effect. However, that activism would have been put to better use to educate people about why DRM is harmful, why piracy should be fought not with law but with smarter pro-consumer marketing by content owners (lowered prices, more options for digital distribution, removal of DRM, fair use, and ubiquitous time-shifting). Look at the ridiculous limitations on Hulu Plus - even if you’re a paid subscriber, some shows won’t air episodes until the week after, old episodes are not always available, some episodes can only be watched on the computer and are restricted from mobile devices. These are utterly arbitrary limitations on watching content that just drive people into the pirates’ arms.
I may disagree with some of the other things in Aziz's post, but the above paragraph is important, and for some reason, people aren't talking about this aspect of the story. Sure, some folks are disputing the numbers, but few are pointing out the things that IP owners could be doing instead of legislation. For my money, the most important thing that IP owners have forgotten is convenience. Aziz points out Hulu, which is one of the worst services I've ever seen in terms of being convenient or even just intuitive to customers. I understand that piracy is frustrating for content owners and artists, but this is not the way to fight piracy. It might be disheartening to acknowledge that piracy will always exist, but it probably will, so we're going to have to figure out a way to deal with it. The one thing we've seen work is convenience. Despite the fact that iTunes had DRM, it was loose enough and convenient enough that it became a massive success (it now doesn't have DRM, which is even better). People want to spend money on this stuff, but more often than not, content owners are making it harder on the paying customer than on the pirate. SOPA/PIPA is just the latest example of this sort of thing.

I've already written about my thoughts on Intellectual Property, Copyright and DRM, so I encourage you to check that out. And if you're so inclined, you can find out what senators and representatives are supporting these bills, and throw them out in November (or in a few years, if need be). I also try to support companies or individuals that put out DRM-free content (for example, Louis CK's latest concert video has been made available, DRM free, and has apparently been a success).

Intellectual Property and Copyright is a big subject, and I have to be honest in that I don't have all the answers. But the way it works right now just doesn't seem right. A copyrighted work released just before I was born (i.e. Star Wars) probably won't enter the public domain until after I'm dead (I'm generally an optimistic guy, so I won't complain if I do make it to 2072, but still). Both protection and expiration are important parts of the way copyright works in the U.S. It's a balancing act, to be sure, but I think the pendulum has swung too far in one direction. Maybe it's time we swing it back. Now if you'll excuse me, I'm going to participate in a different kind of blackout to protest SOPA.

1 - Thanks to James for the concise description. There are lots of much longer and better-sourced descriptions of the shortcomings of this bill and the issues surrounding it, so I won't belabor the point here.
Posted by Mark on January 18, 2012 at 06:20 PM .: link :.

End of This Day's Posts

Sunday, May 22, 2011

About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I'll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I'm busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.

In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.

Of course, this is a massive subject that can't even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it's worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we've already established, is bad for getting things done.

Let's say that you're working on something large and complex. You've managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being "in the zone"). Flow is basically a condition of deep concentration and immersion. When you're in this state, you feel energized and often don't even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda... flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you're doing, listen to the question and hopefully provide a helpful answer. This isn't necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.

Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn't happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you'll need to spend some time getting your brain back up to speed.

In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you're supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company and the nature of our business sometimes requires frequent interruptions, and thus there are times when I am in a near constant state of context switching. None of this is to say I'm not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.

In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they're immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn't nearly as bad as some workplaces that have a public address system - basically a way to interrupt hundreds or even thousands of people in order to reach one person - but it does still represent a challenge.

Now, the really insidious part about all this is that communication is a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.

One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn't quite as effective as we'd like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:
The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.
I don't think it's quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it's just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they've been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren't documented at all.

There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc...). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it's a nonesuch beast. I don't want to get too carried away talking about documentation, so I'll leave it at that (if you're still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it's obviously not the only way to minimize communication strain.

I've previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has become increasingly reliant on software. As such, it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant's own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.
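The email exchange described above can be sketched as a simple message queue: the sender deposits a message and moves on, and the recipient drains the queue whenever it's convenient. This is a toy model of the asynchronous pattern, not any particular email system; all names here are invented for illustration.

```python
from collections import deque

class Inbox:
    """Toy asynchronous channel: senders never block or interrupt the reader."""
    def __init__(self):
        self._messages = deque()

    def send(self, sender, body):
        # The sender's involvement ends here; no response is awaited.
        self._messages.append((sender, body))

    def read_all(self):
        # The reader drains messages at a time of their own choosing.
        drained = []
        while self._messages:
            drained.append(self._messages.popleft())
        return drained

inbox = Inbox()
inbox.send("alice", "Spec question: which API version are we targeting?")
inbox.send("bob", "Standup moved to 10am")
# ... the recipient keeps working, uninterrupted, then checks in later:
messages = inbox.read_all()
print(len(messages))  # 2
```

The key property is that `send` returns immediately: the cost of the communication is paid when the reader chooses, not when the writer does.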

The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least, for younger people). The only major communication tools invented in the past few decades that wouldn't be asynchronous are instant messaging and chat clients. And even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it's really just an extension of conference calls.)

The benefit of asynchronous communication is, of course, that it doesn't (or at least it shouldn't) represent an interruption. If you're immersed in a particular task, you don't have to stop what you're doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.

Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth in such a method, there is no room for clarification and one is often left with only one's own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.

One of my favorite quotations is from Anne Morrow Lindbergh:
To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!
It's difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.

I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.

You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don't work like that. Not to mention the fact that most of the communication efforts I'm talking about are the precursors to the writing of a computer program!

Despite all of this, a light formalization can be helpful and the fact that teams are required to produce important documentation practically requires a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define various systems, acronyms, and other jargon that is referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines surrounding meaningful dialogue outside of the document. Of course, it wouldn't quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.

I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often contains its own set of grammatical patterns which can be different than written language. Furthermore, face-to-face communication also consists of body language and other signs that can influence the meaning of what is said depending on the context in which it is spoken. This sort of nuance just isn't possible in written form.

This actually illustrates a wider problem. Again, I'm no linguist and haven't spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we're treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever changing concept of political-correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.

Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It's very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it's not as large a problem. But most organizations don't have such luxuries. Indeed, we're usually lucky if something is documented at all, let alone well organized and optimized.

The obvious question, which I've skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?

Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only among two people. The Wright brothers, Gilbert and Sullivan, and so on.

So why has design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not later achievements? For instance, who designed the Saturn V rocket? No one knows that, because it was a large team of people (and it was the culmination of numerous predecessors made by other teams of people). Why is that?

The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that "Specialization is for insects" notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like Fluid Dynamics, and you'll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we'd assume. Someone tinkering in their garage couldn't make the Saturn V alone. They'd require too much expertise in a wide and disparate array of fields.

This isn't to say that someone tinkering in their garage can't create something wonderful. Indeed, that's where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind... but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)

And with more people comes more communication. It's a necessity. You cannot collaborate without large amounts of communication. In Tom DeMarco and Timothy Lister's book Peopleware, they call this the High-Tech Illusion:
...the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. ... The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.
(Emphasis mine.) That insight is part of what initially inspired this series of posts. It's very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I'm getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I've been known to say, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I've blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I'm hoping the groundwork laid in these first two posts will mean that the next post won't be quite so long, but you never know!
Posted by Mark on May 22, 2011 at 07:51 PM .: link :.

End of This Day's Posts

Sunday, April 03, 2011

Unnecessary Gadgets
So the NY Times has an article debating the necessity of the various gadgets. The argument here is that we're seeing a lot of convergence in tech devices, and that many technologies that once warranted a dedicated device are now covered by something else. Let's take a look at their devices, what they said, and what I think:
  • Desktop Computer - NYT says to chuck it in favor of laptops. I'm a little more skeptical. Laptops are certainly better now than they've ever been, but I've been hearing about desktop-killers for decades now and I'm not even that old (ditto for thin clients, though the newest hype around the "cloud" computing thing is slightly more appealing - but even that won't supplant desktops entirely). I think desktops will be here to stay. I've got a fair amount of experience with both personal and work laptops, and I have to say that they're both inferior to desktops. This is fine when I need to use the portability, but that's not often enough to justify some of the pain of using laptops. For instance, I'm not sure what kinda graphics capabilities my work laptop has, but it really can't handle my dual-monitor setup, and even on one monitor, the display is definitely crappier than my old desktop (and that thing was ancient). I do think we're going to see some fundamental changes in the desktop/laptop/smartphone realm. The three form factors are all fundamentally useful in their own way, but I'd still expect some sort of convergence in the next decade or so. I'm expecting that smartphones will become ubiquitous, and perhaps become some sort of portable profile that you could use across your various devices. That's a more long term thing though.
  • High Speed Internet at Home - NYT says to keep it, and I agree. Until we can get a real 4G network (i.e. not the slightly enhanced 3G stuff the current telecom companies are peddling), there's no real question here.
  • Cable TV - NYT plays the "maybe" card on this one, but I think I can go along with that. It all depends on whether you watch TV or not (and/or if you enjoy live TV, like sporting events). I'm on the fence with this one myself. I have cable, and a DVR does make dealing with broadcast television much easier, and I like the opportunities afforded by OnDemand, etc... But it is quite expensive. If I ever get into a situation where I need to start pinching pennies, Cable is going to be among the first things to go.
  • Point and Shoot Camera - NYT says to lose it in favor of the smartphone, and I probably agree. Obviously there's still a market for dedicated high-end cameras, but the small point-and-shoot ones are quickly being outclassed by their fledgling smartphone siblings. My current iPhone camera is kinda crappy (2 MP, no flash), but even that works ok for my purposes. There are definitely times when I wish I had a flash or better quality, but they're relatively rare and I've had this phone for like 3 years now (probably upgrading this summer). My next camera will most likely meet all my photography needs.
  • Camcorder - NYT says to lose it, and that makes a sort of sense. As they say, camcorders are getting squeezed from both ends of the spectrum, with smartphones and cheap flip cameras on one end, and high end cameras on the other. I don't really know much about this though. I'm betting that camcorders will still be around, just not quite as popular as before.
  • USB Thumb Drive - NYT says lose it, and I think I agree, though not necessarily for the same reasons. They think that the internet means you don't need to use physical media to transfer data anymore. I suppose there's something to that, but my guess is that Smartphones could easily pick up the slack and allow for portable data without a dedicated device. That being said, I've used a thumb drive, like, 3 times in my life.
  • Digital Music Player - NYT says ditch it in favor of smartphones, with the added caveat that people who exercise a lot might like a smaller, dedicated device. I can see that, but on a personal level, I have both and don't mind it at all. I don't like using up my phone battery playing music, and I honestly don't really like the iPhone music player interface, so I actually have a regular old iPod nano for music and podcasts (also, I like to have manual control over what music/podcasts get on my device, and that's weird on the iPhone - at least, it used to be). My setup works fine for me most times, and in an emergency, I do have music (and a couple movies) on my iPhone, so I could make do.
  • Alarm Clock - NYT says keep it, though I'm not entirely convinced. Then again, I have an alarm clock, so I can't mount much of an offense against it. I've realized, though, that the vast majority of clocks that I use in my house are automatically updated (Cable box, computers, phone) and synced with some external source (no worrying about DST, etc...) My alarm clock isn't, though. I still use my phone as a failsafe for when I know I need to get up early, but that's more based on the possibility of snoozing myself into oblivion (I can easily snooze for well over an hour). I think I may actually end up replacing my clock, but I can see some young whipper-snappers relying on some other device for their wakeup calls...
  • GPS Unit - NYT says lose it, and I agree. With the number of smartphone apps (excluding the ones that come with your phone, which are usually functional but still kinda clunky as a full GPS system) that are good at this sort of thing (and a lot cheaper), I can't see how anyone could really justify a dedicated device for this. On a recent trip, a friend used Navigon's Mobile Navigator ($30, and usable on any of his portable devices) and it worked like a charm. Just as good as any GPS I've ever used. The only problem, again, is that it will drain the phone battery (unless you plug it in, which we did).
  • Books - NYT says to keep them, and I mostly agree. The only time I can see really wanting to use a dedicated eReader is when traveling, and even then, I'd want it to be a broad device, not dedicated to books. I have considered the Kindle (as it comes down in price), but for now, I'm holding out on a tablet device that will actually have a good enough screen for this sort of thing. Which, I understand, isn't too far off on the horizon. There are a couple of other nice things about digital books though, namely, the ability to easily mark favorite passages, or to do a search (two things that would probably save me a lot of time). I can't see books ever going away, but I can see digital readers being a part of my life too.
A lot of these made me think of Neal Stephenson's System of the World. In that book, one of the characters ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)
That sort of "surround and encapsulate" concept seems broadly applicable to a lot of technology, actually.
Posted by Mark on April 03, 2011 at 07:42 PM .: link :.

End of This Day's Posts

Wednesday, March 30, 2011

Artificial Memory
Nicholas Carr cracks me up. He's a skeptic of technology, and in particular, the internet. He's the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he does raise very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don't represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what's really going on.

Otakun recently pointed to an excerpt from Carr's latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:
...by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.
While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he's saying here. We often hear about how modern education is improved by focusing on things like "thinking skills" and "problem solving", but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes requires a base level of knowledge in order to be effective. This is something I've expounded on at length in a previous post, so I won't rehash that here.

The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn't come without its own set of challenges, and I'm sure Carr would be quick to point out that such a crash course would instill a false sense of security in us hapless internet users. After all, how do we know when we've reached that base level of competence? Our incompetence could very well be masking our ability to recognize our incompetence. However, I don't think that's an insurmountable problem. Most of us that use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is to determine just how much I don't know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.

Some other assorted thoughts about Carr's excerpt:
  • I love the concept of a "commonplace book" and immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past 10 or so years on this blog. That being said, it's something I wouldn't mind becoming more organized about, and I've got some interesting ideas about what my personal take on a commonplace would look like.
  • Carr insists that the metaphor that portrays the brain as a computer is wrong. It's a metaphor I've certainly used in the past, though I think what I find most interesting about that metaphor is how different computers and brains really are. The problem with the metaphor is that our brains work nothing even remotely like the way our current computers actually work. However, many of the concepts of computer science and engineering can be useful in helping to model how the brain works. I'm certainly not an expert on the subject, but for example: You could model the brain as a binary computer because our neurons are technically binary. However, our neurons don't just turn on or off, they pulse, and things like frequency and duration can yield dramatically different results. Not to mention the fact that the brain seems to be a massively parallel computing device, as opposed to the mostly serial electronic tools we use. That is, of course, a drastic simplification, but you get the point. The metaphor is flawed, as all metaphors are, but it can also be useful.
  • One thing that Carr doesn't really get into (though he may cover this in a later chapter) is how notoriously unreliable human memory actually is. Numerous psychological studies show just how impressionable and faulty our memory of an event can be. This doesn't mean we should abandon our biological memory, just that having an external, artificial memory of an event (i.e. some sort of recording) can be useful in helping to identify and shape our perceptions.
  • Of course, even recordings can yield a false sense of truth, so things like Visual Literacy are still quite important. And again, one cannot analyze said recordings accurately without a certain base set of knowledge about what we're looking at - this is another concept that has been showing up on this blog for a while now as well: Exformation.
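The binary-versus-pulsing contrast in the brain-as-computer point above can be made a bit more concrete with a crude leaky integrate-and-fire sketch. This is a standard toy model from computational neuroscience, not anything from Carr's book, and every parameter here is invented for illustration; the point is just that the same number of input pulses produces different output depending on their timing, which a single on/off bit can't capture.

```python
def integrate_and_fire(pulse_times, threshold=1.0, leak=0.9, weight=0.3):
    """Crude leaky integrate-and-fire neuron: output depends on the *rate*
    and timing of incoming pulses, not on a single binary input."""
    potential = 0.0
    spikes = 0
    for t in range(100):          # discrete time steps
        potential *= leak         # membrane potential decays ("leaks")
        if t in pulse_times:
            potential += weight   # each incoming pulse nudges it up
        if potential >= threshold:
            spikes += 1           # fire, then reset
            potential = 0.0
    return spikes

# Ten input pulses either way -- only the timing differs:
sparse = integrate_and_fire(set(range(0, 100, 10)))  # slow drip: leaks away, never fires
burst = integrate_and_fire(set(range(0, 10)))        # rapid burst: fires repeatedly
print(sparse, burst)  # → 0 2
```

With the slow drip, the potential leaks away between pulses and never reaches threshold; the rapid burst crosses it twice. Frequency and duration matter, which is exactly why the on/off metaphor falls short.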
And that's probably enough babbling about Carr's essay. I generally disagree with the guy, but on this particular subject, I think we're more in agreement.
Posted by Mark on March 30, 2011 at 06:06 PM .: link :.

End of This Day's Posts

Wednesday, August 04, 2010

A/B Testing Spaghetti Sauce
Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I've explored on this blog, including Sunday's post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of Spaghetti sauce at most supermarkets:
Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what's become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) - indeed, there's another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I'll summarize in this paragraph in case you didn't watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
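Moskowitz's conclusion that there are several distinct ideals rather than one is, in modern terms, what clustering taste data reveals. Here's a minimal sketch using made-up taster preferences and a bare-bones k-means (the two rating axes, the numbers, and the three-segment structure are all invented for illustration, not Moskowitz's actual data or method):

```python
import random

# Hypothetical taster preferences: (desired spiciness, desired chunkiness),
# each on a 0-10 scale, with three loose groups: plain, spicy, chunky.
random.seed(0)
tasters = (
    [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(30)]    # "plain"
    + [(random.gauss(8, 0.5), random.gauss(2, 0.5)) for _ in range(30)]  # "spicy"
    + [(random.gauss(2, 0.5), random.gauss(8, 0.5)) for _ in range(30)]  # "chunky"
)

def kmeans(points, k, iterations=20):
    """Bare-bones k-means with naive random initialization: returns the
    center (the "ideal product") of each of k taste segments."""
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each taster to the nearest current center
            nearest = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                                + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = kmeans(tasters, 3)
for spice, chunk in sorted(centers):
    print(f"segment ideal: spiciness={spice:.1f}, chunkiness={chunk:.1f}")
```

Averaging all ninety tasters together would produce one mediocre middle-of-the-road sauce nobody actually prefers; clustering instead surfaces three distinct "perfect products", which is the heart of Moskowitz's insight.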

Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we're seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download "free" music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of peoples' identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that's all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you're free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday's post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it - create both versions of the image, segment visitors to your site, and track the results.

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn't mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn't it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done - just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I'm not sure what the endgame looks like here. I suppose time will tell. For now, I'm just happy that Amazon's recommendations aren't completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
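The aggregate-versus-segment distinction above can be sketched in a few lines. The counts, segment names, and variants here are all invented for illustration: an aggregate A/B readout shows a dead heat, while the per-segment breakdown reveals that each segment has a clear favorite worth personalizing toward.

```python
from collections import defaultdict

# Hypothetical A/B results for a diamond-ring product image, broken out by a
# (made-up) customer segmentation: (variant, segment, visitors, conversions).
results = [
    ("closeup",    "detail-oriented", 1000, 80),
    ("model_shot", "detail-oriented", 1000, 50),
    ("closeup",    "aspirational",    1000, 40),
    ("model_shot", "aspirational",    1000, 70),
]

def conversion_rates(rows, by_segment=False):
    """Conversion rate per variant, optionally broken out per segment."""
    visitors = defaultdict(int)
    conversions = defaultdict(int)
    for variant, segment, v, c in rows:
        key = (variant, segment) if by_segment else variant
        visitors[key] += v
        conversions[key] += c
    return {k: conversions[k] / visitors[k] for k in visitors}

overall = conversion_rates(results)
per_segment = conversion_rates(results, by_segment=True)
print(overall)      # aggregate: closeup and model_shot both convert at 6.0% -- a wash
print(per_segment)  # per segment: each image clearly wins with a different audience
```

A plain A/B test on this data would conclude the image doesn't matter; the personalized read says show each segment its preferred image and beat both flat variants. (Real tests would also need a significance check before acting on differences this size.)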
Posted by Mark on August 04, 2010 at 07:54 PM .: link :.

End of This Day's Posts

Sunday, July 04, 2010

Noted documentary filmmaker Errol Morris has been writing a series of posts about incompetence for the NY Times. The most interesting parts feature an interview with David Dunning, a psychologist whose experiments have discovered what's called the Dunning-Kruger Effect: our incompetence masks our ability to recognize our incompetence.
DAVID DUNNING: There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.
I found this interesting in light of my recent posting about universally self-affirming outlooks (i.e. seeing the world the way we want to see it). In any case, the interview continues:
ERROL MORRIS: Knowing what you don’t know? Is this supposedly the hallmark of an intelligent person?

DAVID DUNNING: That’s absolutely right. It’s knowing that there are things you don’t know that you don’t know. Donald Rumsfeld gave this speech about “unknown unknowns.” It goes something like this: “There are things we know we know about terrorism. There are things we know we don’t know. And there are things that are unknown unknowns. We don’t know that we don’t know.” He got a lot of grief for that. And I thought, “That’s the smartest and most modest thing I’ve heard in a year.”
It may be smart and modest, but that sort of thing usually gets politicians in trouble. But most people aren't politicians, and so it's worth looking into this concept a little further. An interesting result of this effect is that a lot of the smartest, most intelligent people also tend to be somewhat modest (this isn't to say that they don't have an ego or that they can't act in arrogant ways, just that they tend to have a better idea about how much they don't know). Steve Schwartz has an essay called No One Knows What the F*** They’re Doing (or “The 3 Types of Knowledge”) that explores these ideas in some detail:
To really understand how it is that no one knows what they’re doing, we need to understand the three fundamental categories of information.

There’s the shit you know, the shit you know you don’t know, and the shit you don’t know you don’t know.
Schwartz has a series of very helpful charts that illustrate this, but most people drastically overestimate the amount of knowledge in the "shit you know" category. In fact, that's the smallest category, and it is dwarfed by the shit you know you don’t know category, which is, in itself, dwarfed by the shit you don’t know you don’t know category. The result is that most people who receive a lot of praise or recognition are surprised by it and feel a bit like frauds.

This is hardly a new concept, but it's always worth keeping in mind. When we learn something new, we've gained some knowledge. We've put some information into the "shit we know" category. But more importantly, we've probably also taken something out of the "shit we don't know that we don't know" category and put it into the "shit we know that we don't know" category. This is important because the unknown unknowns category is the most dangerous of the three, not least because our ignorance prevents us from even exploring it. As mentioned at the beginning of this post, our incompetence masks our ability to recognize our incompetence. In the interview, Morris references a short film he did once:
ERROL MORRIS: And I have an interview with the president of the Alcor Life Extension Foundation, a cryonics organization, on the 6 o’clock news in Riverside, California. One of the executives of the company had frozen his mother’s head for future resuscitation. (It’s called a “neuro,” as opposed to a “full-body” freezing.) The prosecutor claimed that they may not have waited for her to die. In answer to a reporter’s question, the president of the Alcor Life Extension Foundation said, “You know, we’re not stupid . . . ” And then corrected himself almost immediately, “We’re not that stupid that we would do something like that.”

DAVID DUNNING: That’s pretty good.

ERROL MORRIS: “Yes. We’re stupid, but we’re not that stupid.”

DAVID DUNNING: And in some sense we apply that to the human race. There’s some comfort in that. We may be stupid, but we’re not that stupid.
One might be tempted to call this a cynical outlook, but what it basically amounts to is that there's always something new to learn. Indeed, the more we learn, the more there is to learn. Now, if only we could invent the technology like what's presented in Diaspora (from my previous post), so we can live long enough to really learn a lot about the universe around us...
Posted by Mark on July 04, 2010 at 07:42 PM .: link :.

End of This Day's Posts

Wednesday, June 23, 2010

Internalizing the Ancient
Otaku Kun points to a wonderful entry in the Astronomy Picture of the Day series:
APOD: Milky Way Over Ancient Ghost Panel
The photo features two main elements: a nice view of the stars in the sky and a series of paintings on a canyon wall in Utah (it's the angle of the photograph and the clarity of the sky that makes it seem unreal to me, but looking at the larger version makes things a bit more clear). As OK points out, there are two corresponding kinds of antiquity here: "one cosmic, the other human". He speculates:
I think it’s impossible to really relate to things beyond human timescales. The idea of something being “ancient” has no meaning if it predates our human comprehension. The Neanderthals disappeared 30,000 years ago, which is probably really the farthest back we can reflect on. When we start talking about human forebears of 100,000 years ago and more, it becomes more abstract - that’s why it’s no coincidence that the Battlestar Galactica series finale set the events 150,000 years ago, well beyond even the reach of mythological narrative.
I'm reminded of an essay by C. Northcote Parkinson, called High Finance or The Point of Vanishing Interest (the essay appears in Parkinson's Law, a collection of essays). Parkinson writes about how finance committees work:
People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is these that finance committees are mostly comprised.
He then goes on to explore what he calls the "Law of Triviality". Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved. Thus he concludes, after a number of humorous but fitting examples, that there is a point of vanishing interest where the committee can no longer comment with authority. Astonishingly, the amount of time that is spent on $10 million and on $10 may well be the same. There is clearly a space of time which suffices equally for the largest and smallest sums.

In short, it's difficult to internalize numbers that high, whether we're talking about large sums of money or cosmic timescales. Indeed, I'd even say that Parkinson was being a bit optimistic. Millionaires and mathematicians may have a better grasp on the situation than most, but even they are probably at a loss when we start talking about cosmic timeframes. OK also mentions Battlestar Galactica, which did end on an interesting note (even if that finale was quite disappointing as a whole) and which brings me to one of the reasons I really enjoy science fiction: the contemplation of concepts and ideas that are beyond comprehension. I can't really internalize the cosmic information encoded in the universe around me in such a way to do anything useful with it, but I can contemplate it and struggle to understand it, which is interesting and valuable in its own right. Perhaps someday, we will be able to devise ways to internalize and process information on a cosmic scale (this sort of optimistic statement perhaps represents another reason I enjoy SF).
Posted by Mark on June 23, 2010 at 08:30 PM .: link :.

End of This Day's Posts

Sunday, May 30, 2010

Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: "camera phones and iPods." This is what I wrote in response:
Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson "Walkman" branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can't be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I'll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

As for other trends, as you mention, I think we're goint to see a lot of hoopla about the next gen gaming consoles. The new Xbox comes out in time for Xmas this year and the new Playstation 3 hits early next year. The new playstation will probably have blue-ray DVD capability, which brings up another coming tech trend: the high capacity DVD war! It seems that Sony may actually be able to pull this one out (unlike Betamax), but I guess we'll have to wait and see...
For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I'm pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn't anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn't really say anything about what we now call "apps".

In terms of game consoles, I didn't really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii's new controller scheme wasn't shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a "high capacity DVD war" and spelled blu-ray wrong.

I'm not generally good at making predictions about this sort of thing, but it's nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I'm not really used to... so here are a few predictions for the rest of this year:
  • Microsoft will release Natal this year, and it will be a massive failure. There will be a lot of neat talk about it and speculation about the future, but the fact is that gesture-based interfaces and voice controls aren't especially great. I'll bet everyone says they'd like to use the Minority Report interface... but once they get to use it, I doubt people would actually find it more useful than current input methods. If it does attain success, it will be because of the novelty of that sort of interaction. As a gaming platform, I think it will be a near total bust. The only way Microsoft will get Natal into homes is to bundle it with the Xbox 360 (without raising the price).
  • Speaking of which, I think Sony's Playstation Move platform will be mildly more successful than Natal, which is to say that it will also be a failure. I don't see anything in their initial slate of games that makes me even want to try it out. All that being said, the PS3 will continue to gain ground against the Xbox 360, though not so much that it will overtake the other console.
  • While I'm at it, I might as well go out on a limb and say that the Wii will clobber both the PS3 and the Xbox 360. As of right now, their year in games seems relatively tame, so I don't see the Wii producing favorable year over year numbers (especially since I don't think they'll be able to replicate the success of New Super Mario Brothers Wii, which is selling obscenely well, even to this day). The one wildcard on the Wii right now is the Vitality Sensor. If Nintendo is able to put out the right software for that and if they're able to market it well, it could be a massive, audience-shifting blue ocean win for them. Coming up with a good "relaxation" game and marketing it to the proper audience is one hell of a challenge though. On the other hand, if anyone can pull that off, it's Nintendo.
  • Sony will also release some sort of 3D gaming and movie functionality for the home. It will also be a failure. In general, I think attitudes towards 3D are declining. I think it will take a high profile failure to really temper Hollywood's enthusiasm (and even then, the "3D bump" of sales seems to outweigh the risk in most cases). Nevertheless, I don't think 3D is here to stay. The next major 3D revolution will be when it becomes possible to do it without glasses (which, at that point, might be a completely different technology like holograms or something).
  • At first, I was going to predict that Hollywood would see a dip in ticket sales, until I realized that Avatar was mostly a 2010 phenomenon, and that Alice in Wonderland has already made about $1 billion worldwide. Furthermore, this summer sees the release of The Twilight Saga: Eclipse, which could reach similar heights (for reference, New Moon did $700 million worldwide), and the next Harry Potter is coming in November (for reference, the last Potter film did around $930 million). Altogether, the film world seems to be doing well... in terms of sales. From my perspective, though, things are not looking especially good when it comes to quality. I'm not as interested in seeing a lot of the movies released so far this year (an informal look at the past few years indicates that I've normally seen about twice as many movies as I have this year - though part of that is due to the move of the Philly film fest to October).
  • I suppose I should also make some Apple predictions. The iPhone will continue to grow at a fast rate, though its growth will be tempered by Android phones. Right now, both of them are eviscerating the rest of the phone market. Once that is complete, we'll be left with a few relatively equal players, and I think that will lead to good options for us consumers. The iPhone has been taken to task more and more for Apple's control-freakism, but it's interesting that Android's open features are going to present more and more of a challenge to that as time goes on. Most recently, Google announced that the latest version of Android would feature the ability for your 3G/4G phone to act as a WiFi hotspot, which will most likely force Apple to do the same (apparently if you want to do this today, you have to jailbreak your iPhone). I don't think this spells the end of the iPhone anytime soon, but it does mean that they have some legitimate competition (and that competition is already challenging Apple with its feature-set, which is promising).
  • The iPad will continue to have modest success. Apple may be able to convert that to a huge success if they are able to bring down the price and iron out some of the software kinks (like multi-tasking, etc... something we already know is coming). The iPad has the potential to destroy the netbook market. Again, the biggest obstacle at this point is the price.
  • The Republicans will win more seats in the 2010 elections than the Democrats. I haven't looked closely enough at the numbers to say whether they could take back either (or both) houses of Congress, but they will gain ground. This is not a statement of political preference either way for me, and my reasons for making this prediction are less about ideology than simple voter dissatisfaction. People aren't happy with the government, and that will manifest as votes against the incumbents. It's too far away from the 2012 elections to be sure, but I suspect Obama will hang on, if for no other reason than that he seems to be charismatic enough that people give him a pass on various mistakes or other bad news.
And I think that's good enough for now. In other news, I have started a couple of posts that are significantly more substantial than what I've been posting lately. Unfortunately, they're taking a while to produce, but at least there's some interesting stuff in the works.
Posted by Mark on May 30, 2010 at 09:00 PM .: link :.

End of This Day's Posts

Sunday, March 14, 2010

Remix Culture and Soviet Montage Theory
A video mashup of The Beastie Boys' popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It's well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question "remix culture":
It’s quite good. But, ultimately, what’s the point?

Leaving aside the questions of copyright and the rest: Seriously…what’s the point? Does this add anything to the culture? I won’t dispute that there’s some technical prowess in creating this mashup. But so what? What does it add to our understanding of the world, or our grasp of the problems that surround us? Anything? Nothing? Is it just “there” for us to have a chuckle with and move on? Is this the future of our entertainment?
These are good questions, and I'm not surprised that the BSG Sabotage video prompted them. The implication of Sonny's post is that he thinks it is an unoriginal waste of talent (he may be playing a bit of devil's advocate here, but I'm willing to play along because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!). In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), which I'll expand on later in this post:
First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.
To which Sonny responds:
I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.
Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don't necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim "Fire!" and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.

These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I'm sure there were tons of other videos, probably much less impressive, that he had created before the ones I'm referencing. Now, I'm not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it's quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.

It's also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with this sort of thing. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped establish the role of editing in films. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would say that the man looked hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe that the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.

For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what's now called Soviet Montage Theory. When D.W. Griffith's Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film's technical qualities, but dismissed the story as "bourgeois", claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith's film, re-editing certain sections to make it more "agitational" and revolutionary.

The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I'm making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to "sharpen the class conflict" and "anti-exploitation" aspects of the film, while still attempting to respect the author's original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course... see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you'll be struck by how modern the editing feels, especially during the famous Odessa Steps sequence (which you'll also recognize if you've ever seen Brian De Palma's "homage" in The Untouchables).

Now, I'm not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I've drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith's Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.

Now, YouTube videomakers haven't quite made that expressive leap yet, but it's only been a few years. It's going to take time, and obviously editing and montage are already well established features of film, so innovation won't necessarily come from that direction. But that doesn't mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert's criticisms are valid, more and more I'm seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it's even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.

To return back to the BSG Sabotage video for a moment, I think that it's worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but it's still there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot's use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the "original" Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I'm no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel - for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul's Boutique infamously contains literally hundreds of samples and remixes of popular music. I'm not sure how they got away with some of that stuff, but I suppose this happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul's Boutique, you'll need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are used in the absence of resources, not just because of a lack of originality (though I guess that's part of it). In 2004 Nate Harrison put together this exceptional video explaining how a 6 second drum beat (known as the Amen Break) exploded into its own sub-culture:

There is certainly some repetition here, and maybe some lack of originality, but I don't find this sort of thing "sad". To be honest, I've never been a big fan of hip hop music, but I can't deny the impact it's had on our culture and all of our music. As I write this post, I'm listening to Danger Mouse's The Grey Album:
It uses an a cappella version of rapper Jay-Z's The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles' LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.
I'm not familiar with Jay-Z's album and I'm probably less familiar with The White Album than I should be, but I have to admit that this combination and the artistry with which the two seemingly incompatible works are combined into one cohesive whole is impressive. Despite the lack of an official release (that would have made Danger Mouse money), The Grey Album made many best of the year (and best of the decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: New technology enables artists to play with existing art, then apply their learnings to something more original later. Again, I don't think that the BSG Sabotage video is particularly groundbreaking, but that doesn't mean that the entire remix culture is worthless. I'm willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage...

Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we'll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we're going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I'm sure there are lots of people who hate the music video aesthetic in film, but it's hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don't think they'll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there's a lot of room for innovation in the interactive realm). In all honesty, I don't really know where remix culture is going, but maybe that's why I like it. I'm looking forward to seeing where it leads.
Posted by Mark on March 14, 2010 at 02:18 PM .: link :.

End of This Day's Posts

Sunday, June 28, 2009

Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (e.g. 5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how it works, it's amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so everything still feels fast. The processor performs these operations in a serial fashion - basically a single-file line of operations.
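As an aside, you can get a feel for how much machinery hides inside "simple" addition by building it out of nothing but the bitwise operations a circuit natively provides. This is just a toy sketch in Python (real adders do this in hardware, one bit at a time), and it only handles non-negative numbers:

```python
def add(a, b):
    # XOR gives the sum of each bit pair without carries;
    # AND (shifted left) gives the carries. Repeat until no carries remain.
    while b:
        a, b = a ^ b, (a & b) << 1
    return a

print(add(13, 9))  # 22 - multiple loop iterations just to add two small numbers
```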

This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once, rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. When a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is usually more than offset by the efficiency gained from keeping the CPU busy.
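As a loose illustration of the save/restore idea (a toy sketch, not how a real OS scheduler actually works), Python generators make the mechanics visible: each task's state is frozen at a yield and restored when the scheduler switches back to it.

```python
from collections import deque

trace = []

def task(name, steps):
    # a generator's paused state is its "saved CPU state": each yield
    # hands control back to the scheduler, and next() restores it exactly
    for i in range(steps):
        trace.append(f"{name}:{i}")
        yield

# round-robin scheduling: context switch after every step
queue = deque([task("A", 2), task("B", 2)])
while queue:
    current = queue.popleft()
    try:
        next(current)          # "load state" and run until the next yield
        queue.append(current)  # switch away: go to the back of the line
    except StopIteration:
        pass                   # this task is finished

print(trace)  # the tasks come out interleaved, as if running at once
```

Even in this toy, the bookkeeping (the queue, the save/restore at each yield) is pure overhead - exactly the cost real context switches pay.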

If context switches happen frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a signal called an Interrupt. For the most part, the computers we're all using are interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.

This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to another and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms, among other physical constraints - so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processors (most commonly with 2 or 4 cores).

Parallel computing can do many things which are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's, but elephants are obviously much less intelligent than humans.

Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size and neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
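That weighted-connection idea is the same abstraction artificial neural networks borrow, and it's easy to sketch. The numbers below are invented purely for illustration:

```python
def neuron_fires(inputs, weights, threshold):
    # each incoming connection has a weight (its influence); the neuron
    # "fires" only if the weighted sum of its inputs crosses a threshold
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

firing  = [1, 0, 1]          # which upstream neurons are currently firing
weights = [0.9, 0.4, -0.3]   # strong, moderate, and inhibitory connections
print(neuron_fires(firing, weights, 0.5))  # True: 0.9 - 0.3 = 0.6 crosses 0.5
```

Real neurons pulse over time and constantly reweight their connections, which this static sketch ignores entirely - but it captures why the wiring matters more than the raw count.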

This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.

However, this all comes with its own set of tradeoffs. The most relevant one for this post is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).

In a computer, everything happens serially, so it's easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, this sort of context switching is much more difficult for us. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system; they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash: you have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing).

When you're working on something specific, you're dedicating a significant portion of your conscious brainpower to that task. In other words, you're probably engaging millions if not billions of neurons. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. A computer only needs to save the current state of a single CPU. Your brain has a much tougher job, and its memory isn't quite as reliable as a computer's. I like to refer to this as mental inertia, and it manifests itself in many different ways.

One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can't always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There's something incredibly heavy in my brain that is extremely hard to get up to speed, but once it's rolling at full speed, it takes no effort to keep it going.
I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.

From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is there (and complain when others show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.

A key component of flow is finding a large, uninterrupted chunk of time in which to work. That can be difficult at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a large number of meetings on our calendars, which only makes it more difficult to concentrate on anything important.

Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.

Another example: if it's 2:40 pm and I know I have a meeting at 3 pm, should I start working on a task I know will take me 3 solid hours to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are that when I get back to my desk, I'll have to refamiliarize myself with the project and what I had already done before proceeding.

Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring email, RSS, Facebook, Twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny: when you look at attempts to increase productivity, efforts tend to focus on managing time. While that's important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).

(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice/versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.

End of This Day's Posts

Wednesday, January 07, 2009

Link Dump
For obvious reasons, time is a little short these days, so here are a few links I've found interesting lately:
  • Still Life - This is a rather creepy short film directed by Jon Knautz. It has a very Twilight Zoney type of feel, and a rather dark ending, but it's quite compelling. Knautz went on to make Jack Brooks: Monster Slayer... alas, that film, while containing a certain charm for the horror aficionado, isn't quite as good as this short.
  • Zero Punctuation: Assassin's Creed: I've seen some of Yahtzee's video game reviews before, but while they are certainly entertaining to watch, I've never quite known whether or not they were actually useful. It can be a lot of fun to watch someone lay the smackdown on stupid games, and Yahtzee certainly has a knack for doing that (plus he has a British accent, and us Americans apparently love to hear Brits rip into stuff), but you never really know how representative of the actual game it really is. Well, after spending a lot of time playing around with Assassin's Creed this week, I have to say that Yahtzee's review is dead on, and hilarious to boot.
  • A Batman Conversation: It's sad and in poor taste, but I bet some variant of this conversation happened quite frequently about a year ago.
  • MGK Versus His Adolescent Reading Habits: Look! I'm only like 2 months behind the curve on this one! MGK posts a bunch of parodies of book covers from famous SF and fantasy authors (I particularly enjoyed the Asimov, Heinlein, and even the Zahn one).
  • Top Ten Astronomy Pictures of 2008: Self-explanatory, but there are some pretty cool pics in here...
  • Books as Games: I realize most of my readers also read Shamus, but still, this faux-review of Snow Crash if it were created as a video game before it became a book but in the present day (it, uh, makes more sense in his post) is pretty cool.
  • "Sacred Cow Slayings" Rumored at Sony... Is PlayStation In Jeopardy?: It figures... I finally get off my butt and buy a PS3 and then rumors start appearing that Sony is about to can the program. I don't think it will happen, but this news is obviously not comforting...
  • Keanu Reeves wants to make a live-action version of Cowboy Bebop. No comment.
Posted by Mark on January 07, 2009 at 08:56 PM .: link :.

End of This Day's Posts

Sunday, November 02, 2008

At ZDNet, Robin Harris makes a mildly persuasive argument that Blu-Ray is dying and will end up becoming a videophile niche format like laserdisc. When Toshiba threw in the towel and gave up on HD-DVD about 8 months ago, it looked like a major victory for Sony on multiple fronts. First, they were the uncontested heir to the HD movie market and second, fence sitters in the next-gen gaming console market had a reason to plunk down a little extra for a PS3. But 8 months later, things haven't changed a whole lot. Standalone BR players have come down in price and will be reaching affordable levels shortly. PS3 sales received a bump, overtaking the XBox sporadically during this year, but it looks like Microsoft's price cut has reestablished PS3 as the loser of the next-gen gaming market (of course, both are being clobbered by Nintendo). Sony is betting on the release of several highly anticipated games for the PS3 this holiday season, which should sell consoles and thus increase BR market penetration.

There are lots of things to consider here:
  • Blu-Ray is better than DVD, but the difference is not as great as between DVD and VHS. One of the big issues with VHS was that the format degraded the more you watched it. DVD was thus a huge step forward in quality that would not degrade. On a personal level, as a huge movie nerd with a relatively large HDTV, I'd love a better solution for watching movies so maybe it would still be worth it for me.
  • The format war between Blu-Ray and HD-DVD really took all the steam out of the enthusiasm for HD discs. I sat on the fence during the war, and I have to admit that I really dislike Sony as a company (more on this a little later).
  • Blu-Ray was counting on the fact that standard DVDs didn't look that great on HD televisions... but they missed the advent of relatively cheap upconverting DVD players. Perhaps if the format war ended sooner, this wouldn't have been that big of a deal, but it's too late for that. I have a large DVD collection and don't need to replace most of them with BR discs because they look good on my cheapo upconverting DVD player.
  • Harris notes an interesting quirk of the industry: while consumers are indifferent to Blu-Ray, only really large producers can afford to release discs in the format. Harris has the details in his article, but it doesn't seem likely that we'll see a lot of small indie or foreign flicks on Blu-Ray unless the price of producing discs comes down significantly. As a movie nerd, this hurts. Hopefully, things would improve if market share increased.
  • While standalone BR players are coming down in price, Sony has repeatedly stated that the PS3 is not (at least, not for the upcoming holiday season, which is when you'd expect it). Sony is counting on their upcoming slate of games to drive sales. This is interesting since the two other next-gen gaming consoles both cost around half of what the PS3 costs. Gaming consoles have the time-honored tradition of selling their console at a loss so that they can pick up market share and make a boatload on games. The PS3 seems to be attempting to buck that trend. This may be because they were too ambitious with their system... I bet they're already losing lots even at the $400 price point. For a variety of reasons, the PS3 is the only BR player I'm really considering. I like video games and from what I've seen, the PS3 is probably the best BR player out there anyway.
  • The current economic woes do not bode well for BR. If we weren't looking at a 2 year recession (at least), then maybe Sony's bullish attitude would be warranted. As it stands, I'm a little confused by their strategy here. They're attempting to wring every last dollar out of every angle. High console prices, high authoring costs and high disc prices make it difficult to really get behind this format.
  • On the plus side, if BR doesn't work out and HD downloads become the way of the future, the PS3 has that built in as well... Of course, they'll have to work out some of the bugs in that system, like the dumb DRM scheme that does not allow you to redownload movies you purchased. DRM plays a big role in why I absolutely hate Sony, so it's distressing to see that they still don't get it. But then, most downloadable movie services have similar issues. That is the one big hurdle downloads will have to clear before going mainstream... and given the way things have gone so far, that's probably going to be a challenge.
  • As a Netflix customer, it's mildly annoying that I'd have to pay a surcharge to be able to rent BR discs. It's an understandable position on Netflix's part - the format is more expensive and the number of BR customers is low - but it's still annoying.
  • One advantage of the PS3 over the XBox is that their online component is free, while you have to pay for XBox Live. On the other hand, XBox Live is by all accounts much better than PS3's online offering, and the PS3 network's terms of service seem to indicate that they really just don't get it.
  • It's only been 8 months since the death of HD-DVD. Perhaps everyone is being a bit too harsh jumping on BR. Sales have been steady, just not stellar. And it turns out that HD-DVD wasn't the only challenge that BR faced. You've got upconverting DVDs, HD Downloads, and now a bad economy to overcome. It's no wonder BR hasn't dominated.
All of that said, I'm still considering a PS3 system. Perhaps that means that the format isn't dying after all... or perhaps it just means that I'm a niche videophile customer. While Sony doesn't seem to be considering price cuts, I'm hoping for some sort of holiday deals. Last year, Moriarty picked up a PS3 and got 15 free movies along with it... Now that the format war is over, I doubt we'll see anything that extreme this year, but something along those lines would definitely get me interested.
Posted by Mark on November 02, 2008 at 01:02 PM .: link :.

End of This Day's Posts

Wednesday, September 24, 2008

The Moon
A few years ago, The Onion put out a book called Our Dumb Century. It was composed of a series of newspaper front pages, one from each year. It was an interesting book, in part because of the events they chose to represent each year and also because The Onion writers are hilarious. The most brilliant entry in the book was from the 1969 edition of the paper:

Newspaper from 1969: Holy Shit, Man Walks on Fucking Moon

Utterly brilliant. You can't read it on that small copy, but there's a whole profanity-laden exchange between Houston and Tranquility Base that's also hysterically funny. As it turns out, The Onion folks went ahead and made a video, complete with archival footage and authentic sounding voices, beeps, static, etc... Incredibly funny. [video via Need Coffee]

Update: Weird, I tried to embed the video in this post, but when you click play it says it's no longer available... but if you go directly to youtube, you can get the video. I'm taking out the embedded video and putting in the link for now.
Posted by Mark on September 24, 2008 at 10:04 PM .: link :.

End of This Day's Posts

Sunday, May 11, 2008

Link Dump: Space!
Time is short, so just a few space themed links for you:
  • Space Station Movie Night: A while back, NASA released the International Space Station's daily logs. Most of the entries are rather dry and technical, but the astronauts sometimes logged what movies they were watching, and Scott David Herman decided to collect all of them in a post. Some highlights:
    24 NOV 2000: Watched disk 1 of "Apocalypse Now". Shep tried to explain why Robert Duvall is always wearing the black cavalry hat, but being a Navy guy, he's not sure he understands it either.

    29 DEC 2000: Let the real "Space Odyssey 2001" proceed.

    5 JAN 2001: Finished the 2nd disk of "2010". Something strange about watching a movie about a space expedition when you're actually on a space expedition.

    26 JAN 2001: We eat dinner and watch "GI Jane". Lots of SEAL questions, and Shep explains why this is not exactly like the real SEAL training.

    6 FEB 2001: We ate some dinner and watched the last part of "City of Angels". Shep did his best to explain to Yuri and Sergei what the phrase "chick flick" means.
    Interestingly, they seem to be watching movies on CDs and don't get a DVD player until 2001 or so. Anyway, there's lots more there. Interesting stuff.
  • Amazing Photos of the NASA Space Shuttle: A series of photos showing how the Space Shuttle and its rocket boosters are assembled in preparation for a launch.
  • Cities at Night: The View from Space: Amazing photos of cities taken from the ISS on the dark side of the planet. You get an interesting view of each city, and the overall density of human development by looking at these photos. I remember seeing something like this world map a while back, and there are many telling observations you could make about human development (observe the difference between North and South Korea, for instance), but you don't get much detail from that. These photos are great. See also this video detailing how the shots were taken and taking a tour around the world... [video via K-Squared Ramblings]
  • The Earth and the Moon as seen from Mars: An interesting perspective. Ever notice in TV shows or movies that whenever you see a planet, you're almost always seeing the full planet in direct view of the sun (i.e. the "light side" of the planet, with none of the dark side visible)? [via Kottke]
That's all for now...
Posted by Mark on May 11, 2008 at 09:57 PM .: link :.

End of This Day's Posts

Sunday, April 27, 2008

Netflix Activity
My recent bout of TV on DVD addiction necessitated an increase in Netflix usage, which made me curious. How well have I really taken advantage of the Netflix service, and is it worth the monthly expense?

If I were to rent a movie at a local video store like Blockbuster, each rental would cost somewhere around $4 (this is an extremely charitable estimate, as I'm sure it's probably closer to $5 at this point), plus the expense in time and effort (I mean, come on, I'd have to drive about a mile out of my way to go to one of these places!). Netflix costs me $15.99 a month for the 3-disc-at-a-time plan (this plan was $17.99 when I signed up, but decreased in price twice during around two years of membership), so it takes about 4-5 rentals a month to bring the price of an average rental down below $4. I've been a member for one year and ten months... how did I do (click for a larger version)?

My Netflix Activity Chart

A few notes on the data:
  • The chart shows both DVD rentals and movies or shows watched online through Netflix's "Watch Instantly" service. A few distinctions should be made here: DVD rentals are measured by the date the DVD was returned, while Watch Instantly rentals are measured when you watch them. Also, when watching a TV series on Watch Instantly, each episode counts as a separate rental (on DVD, a single disc usually holds 3-4 episodes).
  • As you can see, my initial usage was a little erratic, though I apparently tend to fall into a 4-5 month pattern (and you can see two nearly identical curves in 2007) where DVD rentals range from 6-13 per month. 13 appears to be my ceiling for a month, though I've hit that several times.
  • I've only fallen below the 4 disc per month ratio needed to bring the average rental down below $4 once (twice if you count July 2006, but that was my first month of service and does not constitute a full month's worth of data). To be honest, I don't remember why I only returned 2 movies in January 2007, but that was the first and only time I fell below the necessary 4 rentals.
  • My Watch Instantly service usage started off with a bang in July 2007 but quickly trailed off until 2008, when usage skyrocketed. This is when I discovered the TV show Dexter and quickly worked my way through all of the first season episodes (13 in all). Following Dexter, I started in on Ghost in the Shell: Stand Alone Complex and I just finished that today (expect a review later this week), so that means I watched 26 episodes online. Expect this to drop sharply next month (though I still plan on using it significantly, as I'll be following along with Filmspotting's 70's SF marathon, which features several movies in the Watch Instantly catalog). All in all, it's a reasonable service, though I have to admit that watching it on my computer just isn't the same - I bought that 50" widescreen HDTV for a reason, you know...
  • You'll also notice that both March and April of 2008 have me hitting the ceiling of 13 movies per month. This is the first time I've done that in consecutive months and is largely due to watching BSG season 3 and my discovery and addiction to The Wire.
  • As of April 2008, I'm averaging 9 movies a month (I've rented 198 DVDs). Even if I were to use my original price of $17.99 a month, that works out to around $2 a DVD rental. When you factor in the price drops and the Watch Instantly viewing (I've watched 51 things, though again, in some cases what I'm watching is a single episode of a TV show), I'm betting it would come out around $1.50-$1.75.
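The back-of-the-envelope math above is easy to check. Assuming the original $17.99 price for all 22 months (an overestimate, since the price dropped twice along the way):

```python
MONTHS = 22              # one year and ten months of membership
PLAN_PRICE = 17.99       # original monthly price (it later dropped to $15.99)
DVD_RENTALS = 198
INSTANT_VIEWINGS = 51    # some of these are single TV episodes

total = MONTHS * PLAN_PRICE
print(f"Total paid: ${total:.2f}")
print(f"Per DVD rental: ${total / DVD_RENTALS:.2f}")
print(f"Per viewing, counting Watch Instantly: ${total / (DVD_RENTALS + INSTANT_VIEWINGS):.2f}")
```

That works out to roughly $2.00 per DVD and about $1.59 per viewing once Watch Instantly is included - and the actual figure is a bit lower still, since some of those months were billed at the reduced prices.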
So it seems that the service is definitely worth the money and is indeed saving me a lot. Plus, Netflix has a far greater selection than any local video store (with the potential exception of TLA Video, but they're too far from my home to count), thus allowing me to indulge in various genres that you don't see much of in a typical video store. The only potential downside to Netflix is that you can't really rent something on impulse (unless it's on the Watch Instantly service). There are also times when new or popular movies take some time before they're actually available to you, but you have to contend with that from video rental stores as well. Indeed, I can only think of 3-4 times I've had to wait for a movie (this is mostly due to the fact that I tend to rent more obscure fare where people aren't exactly lining up to see it...) For the most part, Netflix has been reliable as well, almost always turning around my returns in short order (I mail it one day, and get the next films two days later). There have been a few mixups and I do remember one movie that wasn't available on the east coast and had to be shipped from California, so it came after a wait of 3-4 days, but for the most part, I'm very happy with the service.

This has been an interesting exercise, because I feel like I'm a little more consistent than the data actually shows. I'm really surprised that there are several months where my rentals went down to 6... I could have sworn I watched at least 2-3 discs a week, with the occasional exception. Still, an average of 9 movies a month is nothing to sneeze at, I guess. I've heard horror stories of Netflix throttling customers - taking longer to deliver discs if you go above a certain number of rentals per month (at a certain point, the cost of processing your rentals becomes more than you're paying, which I guess is what prompts the throttling) - but I haven't had a problem yet. If I keep up my recent viewing habits though, this could change...
Posted by Mark on April 27, 2008 at 11:09 PM .: link :.

End of This Day's Posts

Wednesday, December 05, 2007

Rhetorical Strategy
Every so often, I see someone who is genuinely concerned with reaching the unreachable. Whether it be scientists who argue about how to frame their arguments, alpha-geek programmers who try to figure out how to reach typical, average programmers, or critics who try to open a dialogue with feminists. Debates tend to polarize, and when it comes to politics or religion, assumptions of bad faith on both sides tend to derail discussions pretty quickly.

How do you reach the unreachable? Naturally, the topic is much larger than a single blog entry, but I did run across an interesting post by Jon Udell that outlines Charles Darwin's rhetorical strategy in the book On the Origin of Species (which popularized the theory of evolution).
Darwin, says Slatkin, was like a salesman who finds lots of little ways to get you to say yes before you're asked to utter the big yes. In this case, Darwin invited people to affirm things they already knew, about a topic much more familiar in their era than in ours: domestic species. Did people observe variation in domestic species? Yes. And as Darwin piles on the examples, the reader says, yes, yes, OK, I get it, of course I see that some pigeons have longer tail feathers. Did people observe inheritance? Yes. And again, as he piles on the examples, the reader says yes, yes, OK, I get it, everyone knows that the offspring of longer-tail-feather pigeons have longer tail feathers.

By the time Darwin gets around to asking you to say the big yes, it's a done deal. You've already affirmed every one of the key pillars of the argument. And you've done so in terms of principles that you already believe, and fully understand from your own experience.

It only took a couple of years for Darwin to formulate the idea of evolution by natural selection. It took thirty years to frame that idea in a way that would convince other scientists and the general public. Both the idea, and the rhetorical strategy that successfully communicated it, were great innovations.
I think Udell simplifies the inception and development of the idea of evolution, but the point generally holds. Darwin's ideas didn't come into mainstream prominence until he published his book, decades after he had begun his work. Obviously, Darwin's strategy isn't applicable in every situation, but it is an interesting place to start (I suppose we should keep in mind that evolution is still controversial among the mainstream)...
Posted by Mark on December 05, 2007 at 08:29 PM .: link :.

End of This Day's Posts

Wednesday, November 28, 2007

Facial Expressions and the Closed Eye Syndrome
I've been reading Malcolm Gladwell's book, Blink, and one of the chapters focuses on the psychology of facial expressions. Put simply, we wear our emotions on our face, and some enterprising psychologists took to mapping the distinct muscular movements that the human face can make. It's an interesting process, and it turns out that people who learn these facial expressions (of which there are many) are eerily good at recognizing what people are really thinking, even if they aren't trying to show it. It's almost like mind reading, and we all do it to some extent or another (mostly, we do it unconsciously). Body language and facial expressions are packed with information, and we'd all be pretty much lost without that kind of feedback (which is perhaps why misunderstandings are more common on the phone or in email). Most of the time, our expressions are voluntary, but sometimes they're not. Even if we're trying to suppress our expressions, a fleeting look may cross our faces. Often, these "micro-expressions" last only a few milliseconds and are imperceptible, but when trained psychologists watch video of, say, Harold "Kim" Philby (a notorious Soviet spy) giving a press conference, they're able to read him like a book (slow motion helps).

I found this example interesting, and it highlights some of the subtle differences that can exist between expressions (in this case, between a voluntary and involuntary expression):
If I were to ask you to smile, you would flex your zygomatic major. By contrast, if you were to smile spontaneously, in the presence of genuine emotion, you would not only flex your zygomatic but also tighten the orbicularis oculi, pars orbitalis, which is the muscle that encircles the eye. It is almost impossible to tighten the orbicularis oculi, pars orbitalis on demand, and it is equally difficult to stop it from tightening when we smile at something genuinely pleasurable.
I found that interesting in light of the Closed Eye Syndrome I noticed in Anime. I wonder how that affects the way we perceive Anime. If a smiling mouth by itself means a fake expression of happiness while a smiling mouth and closed eyes means genuine emotion, does that make the animation more authentic? Animation obviously doesn't have the fidelity of video or film, but we can obviously read expressions from animated faces, so I would expect that closed eye syndrome exists more because of accuracy than anything else. In my original post on the subject, Roy noted that the reason I noticed closed eyes in anime could have something to do with the way Japan and the US read emotion. He pointed to an article that claimed Americans focus more on the mouth while the Japanese focus more on the eyes when trying to read emotions from facial expressions. One example from the article was emoticons. For happiness, Americans use a smiley face :) while the Japanese tend to use ^_^ (which seems to be a face with eyes closed). That might still be part of it, but ever since I made the observation, I've noticed similar expressions in American animation (I just recently noticed it a lot in a Venture Bros. episode). Still, occurrences in American animation seem less frequent (or perhaps less obvious), so perhaps the observation still holds.

Gladwell's book is interesting, as expected, though I'm not sure yet if he has a point other than to observe that we do a lot of subconscious analysis and make lots of split-second decisions, and sometimes this is good (other times it's not). Still, he's good at finding examples and drilling down into the issue, and even if I'm not sure about his conclusions, it's always fun to read. There's lots more on this subject in the book (for instance, he goes over how facial expressions and our emotions are a two-way phenomenon - meaning that if you intentionally contort your face in a specific way, you can induce certain emotions. The psychologists I mentioned earlier who were mapping expressions noticed that after a full day of trying to manipulate their facial muscles to show anger (even though they weren't angry) they felt horrible. Some tests have been done to confirm that, indeed, our facial expressions are linked directly to our brain) and it's probably worth a read if that's your bag.
Posted by Mark on November 28, 2007 at 08:19 PM .: link :.

End of This Day's Posts

Sunday, November 25, 2007

Requiem for a Meme
In July of this year, I attempted to start a Movie Screenshot Meme. The idea was simple and (I thought) neat. I would post a screenshot, and visitors would guess what movie it was from. The person who guessed correctly would continue the game by either posting the next round on their blog, or if they didn't have a blog, they could send me a screenshot or just ask me to post another round. Things went reasonably well at first, and the game experienced some modest success. However, the game eventually morphed into the Mark, Alex, and Roy show, as the rounds kept cycling through each of our blogs. The last round was posted in September and despite a winning entry, the game has not continued.

The challenge of starting this meme was apparent from the start, but there were some other things that hindered the game a bit. Here are some assorted thoughts about the game, what held it back, and what could be done to improve the chances of adoption.
  • Low Traffic: The most obvious reason the game tapered off was that my blog doesn't get a ton of traffic. I have a small dedicated core of visitors though, and I think that's why the game lasted as long as it did. Still, the three blogs that comprised the bulk of rounds in the game weren't very high traffic blogs. As such, the pool of potential participants was relatively small, which is the sort of thing that would make it difficult for a meme to expand.
  • Barriers to Entry: The concept of allowing the winner to continue the game on their blog turned out to be a bit prohibitive, as most visitors don't have a blog. Also, a couple of winners expressed confusion as to how to get screenshots, and some didn't respond at all after winning. Of course, it is easy to start a new blog, and my friend Dave even did so specifically to post his round of the game, but none of these things helped get more eyes looking at the game.
  • Difficulty: I intentionally made my initial entries easy (at one point, I even considered making it obscenely easy, but decided to just use that screenshot as a joke), in an attempt to ensnare casual movie viewers, but as the game progressed, screenshots became more and more difficult, and were coming from obscure movies. Actually, if you look at most of the screenshots outside of my blog, there aren't many mainstream movies. Here are some of the lesser-known movies featured in the game: Hedwig and the Angry Inch (this one stumped the interwebs), The Big Tease, Rosencrantz & Guildenstern Are Dead, Children of Men (mainstream, I guess, though I'm pretty sure it wasn't even out on DVD yet), Cry-Baby, Brotherhood of the Wolf, The City of Lost Children, Everything Is Illuminated, Wings of Desire, Who Framed Roger Rabbit (mainstream), Run, Lola, Run, Masters of the Universe (!), I Heart Huckabees, and Runaway. Now, of the ones I've seen, none of these are terrible films (er, well, He-Man was pretty bad, as was Runaway, but they're 80s movies, so slack is to be cut, right?), but they're also pretty difficult to guess for a casual movie watcher. I mean, most are independent, several are foreign, and it doesn't help when the screenshot is difficult to place (even some of the mainstream ones, like Who Framed Roger Rabbit, were a little difficult). Heck, by the end, even I was posting difficult stuff (the 5 screenshot extravaganza featured a couple of really difficult ones). Again, there's nothing inherently wrong with these movie selections, but they're film-geek selections that pretty much exclude mainstream viewers. If the game had become more widespread, this wouldn't have been as big of a deal, as I'd imagine that more movie geeks would be attracted to it. This is an interesting issue though, as several people thought their screenshots were easy, even though their visitors thought they were hard. 
Movies are subjective, so I guess it can be hard to judge the difficulty of a given screenshot. A screenshot that is blatantly obvious to me might be oppressively difficult to someone else.
  • Again Traffic: Speaking of which, once the game had made its way around most of my friends' blogs, things began to slow down a bit because we were all hoping that someone new would win a round. Several non-bloggers posted comments to the effect of: I know the answer, but I don't have a blog and I want this game to spread, so I'll hold off for now. I know I held back on several rounds because of this myself, and as the person who started this whole thing, I found the impulse understandable. In some ways, it was nice to see other people enjoying the game enough to care about its success, but all that holding back didn't help the game spread either.
  • Detectives: At least a couple of people were able to find answers by researching rather than recognizing the movie. I know I was guilty of this. I'd recognize an actor, then look them up on IMDB and see what they've done, which helps narrow down the field considerably. I don't know that this is actually a bad thing, but I did find it interesting.
  • Memerific: The point of a meme is that it's supposed to be self-sustaining and self-propagating. While this game did achieve a modest success at the beginning, it never really became self-sustaining. At least a couple of times, I prodded the game to move it forward, and Roy and Alex did the same. I guess the memetic inertia was constantly being worn down by the factors discussed in this post.
  • Help: Given the above, there were several things that could have helped. I could have done a better job promoting the game, for instance. I could have made it easier for other bloggers to post a round. One of the things I wanted to do was create little JavaScript snippets that people could use to very quickly display the unwieldy rules (perhaps using nifty display techniques that hide most of the text initially until you click to learn more) and another little JavaScript snippet that would display the current round (in a nice little graphical button or something). Unfortunately, this game pretty much coincided with the busiest time of my professional career, and I didn't have a lot of time to do anything (just keeping up with the latest round was a bit of a challenge for me).
  • Variants: One thing that may have helped would be to spread the game further by allowing winners to "tag" other bloggers they wanted to see post screenshots, rather than just posting their own round. I actually considered this when designing the game, but decided against it: many people hate memes and don't like being "tagged" to participate, and knowing this, a lot of people who do participate in memes are hesitant to "tag" others. I didn't want to annoy people with the blogging equivalent of chain letters. However, that variant doesn't depend on casual movie fans having blogs of their own, and it would allow the meme to spread much further, much faster. If the winner had to tag 5 other bloggers to participate, the meme could spread exponentially. That would be much more difficult to track, but on the other hand, it might actually catch on. This might be the biggest way to improve the meme's chances at survival.
  • Alternatives: This strikes me as something that would work really well on a message board type system, especially one that allowed users to upload their own images. Heck, I wouldn't be surprised to see something like this out there. It also might have been a good idea to create a way to invite others to play the game via email (which probably would only work on a message board or dedicated website, where there's one central place that screenshots are posted). However, one of the things that's neat about blog memes is that they tend to get your blog exposed to people who wouldn't otherwise visit.
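Just to make the "spread exponentially" bit in the Variants bullet concrete, here's a toy calculation. It assumes every tagged blogger participates and nobody gets tagged twice, which is obviously wildly optimistic for a small blogging circle:

```python
# Toy model of the "tag 5 bloggers" variant: each round, every current
# participant tags five new blogs, so participation grows geometrically.
branching = 5
blogs = 1  # round zero: just this blog
for round_num in range(1, 6):
    blogs *= branching
    print(f"round {round_num}: {blogs} blogs")
# After five rounds that's 3125 blogs, versus the three blogs that
# actually carried most of the game.
```

Even with heavy attrition, a branching factor above one is what separates a self-propagating meme from one that needs constant prodding.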
It was certainly an interesting and fun experience, and I'm glad I did it. Just for kicks, I'll post another screenshot. Feel free to post your answer in the comments, but I'm not especially expecting this to progress much further than it did before (though anything's possible):

Screenshot Game, round 24

(click image for a larger version) I'd say this is difficult except that it's blatantly obvious who that is in the screenshot. It shouldn't be that hard to pick out the movie even if you haven't seen it. What the heck: the winner of this round can pick 5 blogs they'd like to see post a screenshot, and post their own round if they desire. As I mentioned above, I'm hesitant to annoy people with this sort of thing, but hey, why not? Let's give this meme some legs.
Posted by Mark on November 25, 2007 at 03:04 PM .: link :.

End of This Day's Posts

Sunday, November 18, 2007

The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.

This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.

Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.
Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...

I'm skeptical. The alternative to letting people choose is choosing for them. The lessons of a century of retail science (along with the history of Soviet department stores) are that this is not what most consumers want.
Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.
Settling for something that is good enough to meet your needs is quite different than just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's argument:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.
Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"

The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented is important is interesting. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans dammit.

Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

Online, however, the consumer has a lot more help. There are a nearly infinite number of techniques to tap the latent information in a marketplace and make that selection process easier. You can sort by price, by ratings, by date, and by genre. You can read customer reviews. You can compare prices across products and, if you want, head off to Google to find out as much about the product as you can imagine. Recommendations suggest products that 'people like you' have been buying, and surprisingly enough, they're often on-target. Even if you know nothing about the category, ranking best-sellers will reveal the most popular choice, which both makes selection easier and also tends to minimize post-sale regret. ...

... The paradox of choice is simply an artifact of the limitations of the physical world, where the information necessary to make an informed choice is lost.
I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think retailers that offer recommendations based on other customers' purchases are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own personal anecdotal observations, few people put much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.

When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?

I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source, Deep Throat, who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn't an insider. Nor did Weil's source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. "They had their chief accounting officer and six or seven people fly up to Dallas," Weil says. They met in a conference room at the Journal's offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. ...

Of all the moments in the Enron unravelling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You're "entitled to be told what the financial condition of the company is," the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron expose came from Enron, and when he wanted to confirm his numbers the company's executives got on a plane and sat down with him in a conference room in Dallas.
Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations (that is, summaries of the deals put together for interested parties) and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."
Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.
Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).

As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.

End of This Day's Posts

Wednesday, October 17, 2007

The Spinning Silhouette
This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it's being touted as a "right brain vs left brain test." The theory goes that if you see the silhouette spinning clockwise, you're right brained, and you're left brained if you see it spinning counterclockwise.

Every time I looked at the damn thing, it was spinning a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn't seem to rotate all the way around at all; instead it seems like she's moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen - and the leg always seems to be in front of the silhouette).

Of course, this is the essence of the illusion. The silhouette isn't actually spinning at all, because it's two dimensional. However, since my brain is used to living in a three dimensional world (and thus parsing three dimensional images), it's assuming that the image is also three dimensional. We're actually making lots of assumptions about the image, and that's why we can see it going one way or the other.

Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I'm pretty sure this thing is either broken or cheating. Well, I shouldn't say that. I noticed something off in one of the frames, and I'd be really curious to know how that affects people's perception of the illusion (to me, it means the image is definitely moving counterclockwise). I'm almost positive that it's too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold. First things first, here's the actual spinning silhouette.

The Spinning Silhouette

Again, some of you will see it spinning in one direction, some in the other direction. Everyone seems to have a different trick for getting it to switch direction. Some say to focus on the shadow, some say to look at the ankles. Closing my eyes and reopening seems to do the trick for me. Now let's take a closer look at one of the frames. Here's frame 12:

In frame 12, the illusion is still intact

Looking at this frame, you should be able to switch back and forth, seeing the leg behind the person or in front of the person. Again, because it's a silhouette and a two dimensional image, our brain usually makes an assumption of depth, putting the leg in front or behind the body. Switching back and forth on this static image was actually a lot easier for me. Now the tricky part comes in the next frame, number 13 (obviously, the arrow was added by me):

In frame 13, there is a little gash in the leg

Now, if you look closely at the leg, you'll see a little imperfection in the silhouette. Maybe I'm wrong, but that little gash in the leg seems to imply that the leg is behind the body. If you try, you can still get yourself to see the image as having the leg in front, but then you've got this gash in the leg that just seems very out of place.

So what to make of this? First, the imperfection is subtle enough (it's on 1 frame out of 34) that everyone still seems to be able to see it rotate in both directions. Second, maybe I'm crazy, and the little gash doesn't imply what I think. Anyone have alternative explanations? Third, is that imperfection intentional? If so, why? It does not seem necessary, so I'd be curious to know if the creators knew about it, and what their intention was regarding it.
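If you'd rather not fire up the GIMP, the frames of an animated GIF can be pulled apart with a short script. This sketch uses the Pillow imaging library (a third-party package, `pip install Pillow`); the filename is just a placeholder, not the actual file:

```python
from PIL import Image  # third-party: pip install Pillow

def extract_frames(path, out_prefix="frame"):
    """Save each frame of an animated GIF as a separate PNG
    and return the total frame count."""
    gif = Image.open(path)
    count = 0
    try:
        while True:
            gif.seek(count)  # raises EOFError past the last frame
            gif.convert("RGB").save(f"{out_prefix}_{count:02d}.png")
            count += 1
    except EOFError:
        pass  # no more frames
    return count

# e.g. extract_frames("spinning_silhouette.gif")
# For this illusion, that should yield the 34 frames discussed above.
```

From there you can step through the PNGs in any image viewer and look for the gash yourself.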

Finally, as far as the left brain versus right brain portion, I find that I don't really care, but I am interested in how the imperfection would affect this "test." This neuroscientist seems to be pretty adamant about the whole left/right thing being hogwash though:
...the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).
At the very least, the traditional left/right brain theory is a wildly oversimplified version of what's really happening. The post also goes into the way the brain "fills in the gaps" for confusing visual information, thus allowing the illusion.

Update: Strange - the image appears to be rotating MUCH faster in Firefox than in Opera or IE. I wonder how that affects perception.
Posted by Mark on October 17, 2007 at 10:42 PM .: link :.

End of This Day's Posts

Sunday, June 03, 2007

The Long Tail of Forgotten Works
I'm currently reading Chris Anderson's book The Long Tail, and he relates a story about how some books find an audience long after they've been published.
In 1988, a British mountain climber named Joe Simpson wrote a book called Touching the Void, a harrowing account of near death in the Peruvian Andes. Though reviews for the book were good, it was only a modest success, and soon was largely forgotten. Then, a decade later, a strange thing happened. Jon Krakauer wrote Into Thin Air, another book about a mountain-climbing tragedy, which became a publishing sensation. Suddenly Touching the Void started to sell again.

Booksellers began promoting it next to their Into Thin Air displays, and sales continued to rise. In early 2004, IFC Films released a docudrama of the story, to good reviews. Shortly thereafter, HarperCollins released a revised paperback, which spent fourteen weeks on the New York Times best-seller list. By mid-2004, Touching the Void was outselling Into Thin Air more than two to one.

What happened? Online word of mouth. When Into Thin Air first came out, a few readers wrote reviews on Amazon.com that pointed out the similarities with the then lesser-known Touching the Void, which they praised effusively. Other shoppers read those reviews, checked out the older book, and added it to their shopping carts. Pretty soon the online bookseller's software noted the patterns in buying behavior--"Readers who bought Into Thin Air also bought Touching the Void"--and started recommending the two as a pair. People took the suggestion, agreed wholeheartedly, wrote more rhapsodic reviews. More sales, more algorithm-fueled recommendations--and a powerful positive feedback loop kicked in.

Particularly notable is that when Krakauer's book hit shelves, Simpson's was nearly out of print. A decade ago readers of Krakauer would never even have learned about Simpson's book--and if they had, they wouldn't have been able to find it. Online booksellers changed that. By combining infinite shelf space with real-time information about buying trends and public opinion, they created the entire Touching the Void phenomenon. The result: rising demand for an obscure book.
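The "Readers who bought Into Thin Air also bought Touching the Void" pattern Anderson describes is, at its core, just co-occurrence counting over shopping carts. A toy sketch (the purchase data is made up for illustration):

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of titles per customer
carts = [
    {"Into Thin Air", "Touching the Void"},
    {"Into Thin Air", "Touching the Void", "The Long Tail"},
    {"Into Thin Air", "Touching the Void"},
    {"Into Thin Air", "The Long Tail"},
]

# Count how often each pair of titles shows up in the same cart
pair_counts = Counter()
for cart in carts:
    for pair in combinations(sorted(cart), 2):
        pair_counts[pair] += 1

def also_bought(title):
    """Titles most frequently co-purchased with `title`, best first."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if a == title:
            related[b] += n
        elif b == title:
            related[a] += n
    return [t for t, _ in related.most_common()]

print(also_bought("Into Thin Air"))
# ['Touching the Void', 'The Long Tail']
```

Real recommendation engines normalize for overall popularity and work at a vastly larger scale, but the feedback loop in the quote needs nothing more exotic than this: purchases feed the counts, the counts feed the suggestions, and the suggestions feed more purchases.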
There is something interesting going on here. I'm wondering how many great works of art are simply lost in obscurity. These days, we've got the internet and primitive tools to traverse the long tail, so it seems that a lot of obscure works find a new audience when a new, similar work is released. But what happened before the internet? How many works have simply gone out of print because they never found an audience - how many works suffered the fate Touching the Void narrowly avoided?

Of course, I have no idea (that's kinda the point), but one of the great things about the internet and the emerging infinite shelf space of online retailers is that some of these obscure works are rediscovered and new connections are made. For instance, I once came across a blog post by Jonathon Delacour about an obscure Japanese horror film called Matango: Attack of the Mushroom People. The description of the film?
After a yacht is damaged in a storm and stranded on a deserted island, the passengers: a psychologist, his girlfriend, a wealthy businessman, a famous singer, a writer, a sailor and his skipper take refuge in a fungus covered boat. While using the mushrooms for sustenance, they find the ship's journal describing the mushrooms to be poisonous, however some members of the shipwrecked party continue to ingest the mysterious fungi transforming them into hideous fungal monsters.
Sound familiar? As Delacour notes, a reviewer on Amazon.com sure thinks so:
Was this the Inspiration for Gilligan's Island? ...and that's a serious question. It predated the premier of Gillian's Island by several years. There's a millionaire who owns a yacht that looks like the Minnow. On board is a professor, the captain, a goofy (though somewhat sinster in the film) first mate, a pretty but shy country girl named Okiko, and a singer/movie star. There are seven castaways in all. "Lovey" is replaced by another male character, a writer named Roy. The boat crashes into an island where they are castaways... Course on Gilligan's Island they didn't all turn into mutated mushrooms monsters. Rent or buy the DVD (one of my favorite films in Japanese cinema, finally getting its due...) and you tell me if Gilligan's Island isn't a complete rip-off of this film.
Several reviewers actually make the Gilligan's Island connection, and one even takes time to refute the claim that Gilligan ripped off Matango:
Actually as stated on this DVD's actor commentary Matango premiered in Japanese theaters in and around mid 1963. The Gilligan's Island first pilot (with different actors as The Professor and Ginger)was made in late 1963 thus the Japanese film does not predate Gilligan by a few years as another poster here thinks.Schwartz could have heard about a Japanese film made with seven castaways (as Hollywood and Tokoyo's Toho were in communication). But he definitely didn't see the Japanese film before he pitched gI to the networks in early 63.
So perhaps this was just a happy coincidence... A commenter on Delacour's post mentions that the movie is loosely based on a 1907 short story by William Hope Hodgson called The Voice in the Night, but while it certainly was the inspiration behind Matango, it probably didn't inspire Gilligan's Island...

I seem to have veered off track here, but it was an interesting diversion: from obscure Japanese horror film to Gilligan's Island to William Hope Hodgson... would anyone have made these connections 20 years ago? It certainly would have been possible, but I doubt it would happen as quickly or efficiently as it did on the internet.
Posted by Mark on June 03, 2007 at 08:35 PM .: link :.

End of This Day's Posts

Sunday, April 29, 2007

Again Cell Phones
About 2 years ago, I started looking around for a new cell phone. At the time, I just wanted a simple, no-frills type phone, but I kept an open mind and looked at some of the more advanced features that were becoming available. I eventually settled on a small, low-end Nokia. I instantly regretted the decision not to get a camera phone, but otherwise, the phone has performed admirably. The only other complaint I really have is that the call volume could stand to be a little louder. In any case, in the comments of one of the above linked posts, I mentioned:
I'm actually kinda surprised that cell phones aren't... better than they are now. I figure in about 2 years, my dream phone will be more attainable, so for now, I'll make do with what I got.
Well, it's been 2 years, I'm once again looking into purchasing a new phone and... I'm still surprised that cell phones aren't better than they are right now. Seriously, what the heck is going on? My priorities aren't that unusual and have only changed a little since my last foray: I want a phone that has strong battery life, good call quality (with louder call volume), good usability (i.e. button placement, menu structure, etc...), and a quality camera (at least 1.3 megapixel). There are lots of secondary features and nice-to-haves, but those are the most important things. This is apparently difficult to achieve though, and I'm distinctly underwhelmed by my options. Actually there are a lot of decent phones out there, but I think I've fallen into the classic paradox of choice trap. Here are some phones I'm considering:
  • Sony Ericsson W810i: When I bought my last phone, I remarked that the Sony Ericsson W800i seemed really interesting because it was basically knocking out 3 birds with one stone: phone, camera, and mp3 player. At the time, it was obscenely expensive and it seemed to suffer from numerous glitches. The W810i is the successor to the W800i, and by all accounts Sony Ericsson has worked through a lot of the issues to produce a pretty solid phone. I have some minor concerns about the keypad, but everything else seems in order (and the phone looks great - 2 MP with a flash) and the price tag is pretty reasonable for such a fully-featured phone. The only thing that really goes against my requirements is the "staticky call quality" that's referenced in the reviews. Also, I hate Sony. I really don't want to give them my money.
  • Motorola SLVR: I've never been a big fan of clamshell phones, so I never really cared that much about the RAZR when it came out. Then Motorola released the SLVR, which seems like a decent phone at first glance. Decent battery life (not as good as the Sony Ericsson though), reasonable sound quality, and all the standard cell phone features. The one big problem for me is that the camera looks crappy. I believe the newer models are improving the camera, so we'll see how that goes (in general, Motorola's phones don't seem to have great cameras though, even when they have decent resolutions). If they improve the camera, I'd gladly pick this over the Sony Ericsson.
  • Motorola KRZR: This is another interesting option, but once again, I'm a little turned off by the camera. It seems better than the camera on the SLVR, but still nowhere near the Sony Ericsson. There seem to be a lot of different versions of Motorola cell phones (no matter what variety), so it's a little confusing going through them all and trying to figure out which one meets your needs. I don't normally love flip phones, but I think this one's pretty good. Aside from the camera, this one appears to be a little more expensive too, which is a bummer.
  • Nokia 5300 Xpress Music: Well, this one isn't a real option just yet simply because it's not available on Cingular or Verizon. That said, it's a quality phone, and I've had good experiences with Nokia. Again, the camera seems decent but nowhere near the Sony Ericsson. The only other problem is that it seems the volume doesn't go loud enough, and that's one of my primary annoyances.
  • LG VX8600: This is the flip-phone version of the hip Chocolate phones, and it seems to have improved upon the Chocolate as well. This supposedly has one of the better cameras, but it has awful battery life.
There are some interesting phones coming. I'd love an iPhone, but I can't justify the cost. I'm interested in the rumored Microsoft and Google phones, but I doubt they'll be coming anytime soon. Of course, there are probably dozens of phones that would readily meet my needs, but they're not available in the US. I'm hardly the first person to note this, but it is quite frustrating. I understand why this is happening (the US is a small, fractured market that uses a variety of technologies and frequencies different from what Europe & Asia use, so companies naturally focus on the larger, more homogenous European & Asian markets), but it's still annoying. I'm not sure how this will be rectified; perhaps we'll just have to wait until 4G comes along (assuming everyone adopts the same 4G).

Update: Drool. Battery life looks lame, but otherwise it's great. Not that it matters, as it ain't available yet.
Posted by Mark on April 29, 2007 at 07:39 PM .: Comments (2) | link :.

End of This Day's Posts

Wednesday, March 07, 2007

A System of Warnings
Josh Porter recently wrote about some design principles he uses. As Josh notes, people often confuse design with art. Art is a form of personal expression, while design is about use.
The designer needs someone to use (not only appreciate) what they create. Design doesn't serve its purpose without people to use it. Design helps solve human problems. The highest accolade we can bestow on a design is not that it is beautiful, as we do in Art, but that it is well-used.
I think one of the most recognized and perhaps important designs of the past twenty years or so is the Nutrition Facts label. Instantly recognizable and packed with information, yet concise and easy to read and use. It's not glamorous, but it works so well that we barely even notice it. It's great design.

While nutrition is certainly an important subject worthy of a thoughtful design, I recently stumbled upon a design project that is intriguing, difficult and important. In the desert of Southeastern New Mexico lies the Waste Isolation Pilot Plant (WIPP), an underground radioactive waste repository. Not a pleasant place. During the planning stages of the facility, a panel of experts was tasked with designing a 10,000-year marking system. It's an intriguing design problem. The resulting report is an astounding, powerful and oddly poignant document (excerpts here, huge .pdf version of the full report here). They developed an interesting system here; note, they didn't just create signs, the entire site (from the physical layout to the words and imagery used) was designed to communicate a message across multiple levels, with a high level of redundancy. It's not just a warning, it's a system of interconnected and reinforced warnings. The authors also attempted to anticipate a variety of potential attacks. What is the message they wanted to convey? Here's a brief summary:
  • This place is a message... and part of a system of messages... pay attention to it!
  • Sending this message was important to us. We considered ourselves to be a powerful culture.
  • This place is not a place of honor... no highly esteemed deed is commemorated here... nothing valued is here.
  • What is here is dangerous and repulsive to us. This message is a warning about danger.
  • The danger is in a particular location... it increases toward a center... the center of danger is here... of a particular size and shape, and below us.
  • The danger is still present, in your time, as it was in ours.
  • The danger is to the body, and it can kill.
  • The form of the danger is an emanation of energy.
  • The danger is unleashed only if you substantially disturb this place physically. This place is best shunned and left uninhabited.
  • All physical site interventions and markings must be understood as communicating a message. It is not enough to know that this is a place of importance and danger...you must know that the place itself is a message, that it contains messages, and is part of a system of messages, and is a system with redundance.
As James Grimmelmann notes, this is "frightening, apocalyptic poetry." I find the third bullet to be particularly evocative. The assumptions the authors had to make in working on this design are interesting to contemplate. They're assuming that the audience for this design will be significantly different, perhaps not even human (in any case, the assumption is that something bad has happened and we're no longer around). Again, this is an intriguing design problem. I think they've done a pretty good job thinking about the problem, even if some of their more exotic designs didn't make it into the final system.
Posted by Mark on March 07, 2007 at 08:38 PM .: Comments (0) | link :.

End of This Day's Posts

Wednesday, February 21, 2007

Link Dump
Various links for your enjoyment:
  • The Order of the Science Scouts of Exemplary Repute and Above Average Physique: Like the Boy Scouts, but for Scientists. Aside from the goofy name, they've got an ingenious and hilarious list of badges, including: The "my degree inadvertantly makes me competent in fixing household appliances" badge, The "I've touched human internal organs with my own hands" badge, The "has frozen stuff just to see what happens" badge (oh come on, who hasn't done that?), The "I bet I know more computer languages than you, and I'm not afraid to talk about it" badge (well, I used to know a bunch), and of course, The "dodger of monkey shit" badge. ("One of our self explanatory badges."). Sadly, I qualify for fewer of these than I'd like. Of course, I'm not a scientist, but still. I'm borderline on many though (for instance, the "I blog about science" badge requires that I maintain a blog where at least a quarter of the material is about science - I certainly blog about technology a lot, but explicitly science? Debatable, I guess.)
  • Dr. Ashen and Gizmodo Reviews The Gamespower 50 (YouTube): It's a funny review of a crappy portable video game device, just watch it. The games on this thing are so bad (there's actually one called "Grass Cutter," which is exactly what you think it is - a game where you mow the lawn).
  • Count Chocula Vandalism on Wikipedia: Some guy came up with an absurdly comprehensive history for Count Chocula:
    Ernst Choukula was born the third child to Estonian landowers in the late autumn of 1873. His parents, Ivan and Brushken Choukula, were well-established traders of Baltic grain who-- by the early twentieth century--had established a monopolistic hold on the export markets of Lithuania, Latvia and southern Finland. A clever child, Ernst advanced quickly through secondary schooling and, at the age of nineteen, was managing one of six Talinn-area farms, along with his father, and older brother, Grinsh. By twenty-four, he appeared in his first "barrelled cereal" endorsement, as the Choukula family debuted "Ernst Choukula's Golden Wheat Muesli", a packaged mix that was intended for horses, mules, and the hospital ridden. Belarussian immigrant silo-tenders started cutting the product with vodka, creating a crude mush-paste they called "gruhll" or "gruell," and would eat the concoction each morning before work.
    It goes on like that for a while. That particular edit has been removed from the real article, but there appears to actually be quite a debate on the Talk page as to whether or not to mention it in the official article.
  • The Psychology of Security by Bruce Schneier: A long draft of an article that delves into psychological reasons we make the security tradeoffs that we do. Interesting stuff.
  • The Sagan Diary by John Scalzi (Audio Book): I've become a great fan of Scalzi's fiction, and his latest work is available here as audio (a book is available too, but it appears to be a limited run). Since the book is essentially the diary of a woman, he got various female authors and friends to read a chapter each. This actually makes for somewhat uneven listening, as some readings are great and others less so. Now that I think about it, this book probably won't make sense if you haven't read Old Man's War and/or The Ghost Brigades. However, they're both wonderful books of the military scifi school (maybe I'll write a blog post or two about them in the near future).
Posted by Mark on February 21, 2007 at 08:16 PM .: link :.

End of This Day's Posts

Wednesday, February 14, 2007

Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.

I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited times" is the key. In England, for example, an author does not merely hold a copyright on their work, they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.
The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems emphasize the creator's individual rights above all, while others deliberately limit those rights. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.

The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, the convergence of new compression techniques and increasing internet bandwidth created a much bigger problem. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to make and distribute on a very large scale.

The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old guy who doesn't even own a computer or know how to operate one).

Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.

The concept of borrowing a book, CD or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).

There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.

Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).

To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.

Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.

DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.

A few months ago, my Windows computer died and I decided to give Linux a try. I wanted to see if I could get Linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on Linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that format on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.

Which brings me to my next point, which is that DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.

Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the eMusic service sells high quality, variable bit rate MP3 files without DRM, and that has established eMusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.

Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.
For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.

The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).
This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware… And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.

My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited off of giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix it, but I don't think it has been adopted to the extent that it should be.

To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
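To put a number on "easily over a century": here's a quick back-of-the-envelope calculation under a life-plus-70 term. The dates are entirely hypothetical, just to illustrate the arithmetic:

```python
# Hypothetical example of a life-plus-70 copyright term.
# (Real terms expire at the end of the calendar year, but the
# rough arithmetic is the point here.)
publication_year = 1950   # say the author publishes at age 30...
death_year = 2000         # ...and dies at age 80

term_expires = death_year + 70                    # life plus 70 years
years_locked_up = term_expires - publication_year

print(term_expires)       # 2070
print(years_locked_up)    # 120 years under copyright
```

A work published in 1950 by an author who lives a normal lifespan stays locked up for well over a century.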

Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...

Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.

End of This Day's Posts

Wednesday, January 10, 2007

iPhone
A couple of years ago, I was in the market for a new phone. After looking around at all the options and features, I ended up settling on a relatively "low-end" phone that was good for calls and SMS and that's about it. It was small, simple, and to the point, and while it has served me well, I have kinda regretted not getting a camera in the phone (this is the paradox of choice in action). I considered the camera phone, as well as phones that played music (three birds with one stone!), but it struck me that feature packed devices like that simply weren't ready yet. They were expensive, clunky, and the interface looked awful.

Enter Apple's new iPhone. Put simply, they've done a phenomenal job with this phone. I'm impressed. Watch the keynote presentation here. Some highlights that I found interesting:
  • Just to mention some of the typical stuff: it's got all the features of a video iPod, it's got a phone, it's got a camera, and it's got the internet. It has an iPod connector, so you can hook it up to your computer and sync all the appropriate info (music, contacts, video, etc...) through iTunes (i.e. an application that everyone is already familiar with because they use it with their iPod.) It runs Mac OSX (presumably a streamlined version) and has a browser, email app, and widgets. Battery life seems very reasonable.
  • Ok enough of the functionality. The functionality is mostly, well, normal. There are smart phones that do all of the above. Indeed, one of the things that worries me about this phone is that by cramming so much functionality into this new phone, Apple will also be muddying the interface... but the interface is what's innovative about this phone. This is what the other smart phones don't do. In short, the interface is a touch screen (no physical keyboard, and no stylus; it takes up the majority of the surface area of a side of the phone and you use your fingers to do stuff. Yes, I said fingers, as in multiple. More later.) This allows them to tailor the interface to the application currently in use. Current smart phones all have physical controls that must stay fixed (i.e. a mini qwerty keyboard, and a set of directional buttons, etc...) and which are there whether you need them for what you're doing or not. By using a touch screen, Apple has solved that problem rather neatly (Those of you familiar with this blog know what's coming, but it'll be a moment).
  • Scrolling looks fun. Go and watch the demo. It looks neat and, more importantly, it appears to be consistent between all the applications (i.e. scrolling your music library, scrolling through your contacts, scrolling down a web page, etc...). Other "multi-touch" operations also look neat, such as the ability to zoom into a web page by squeezing your fingers on the desired area (iPhone loads the actual page, not the WAP version, and allows you to zoom in to read what you want - another smart phone problem solved (yes, yes, it's coming, don't worry)). The important thing about the touch interface is that it is extremely intuitive. You don't need to learn much in order to use this phone and its touch screen interface.
  • The phone does a few interesting new things. It has a feature they're calling "visual voicemail" which lets you see all of your voicemail, then select which one you want to listen to first (a great feature). It makes conference calls a snap, too. This is honestly something I can't see using that much, but the interface to do it is better than any other conference call interface I've seen, and it's contextual in that you don't have to deal with it until you've got two people on the phone.
  • It's gyroscopic, dude. It has motion sensors that detect the phone's orientation. If you're looking at a picture, and you turn the phone, the picture will turn with you (and if it's a landscape picture, it'll fill more of the screen too). It senses the lighting and adjusts the screen's display to compensate for the environment (saves power, provides better display). When you put the phone by your ear to take a call, it senses that, and deactivates the touchscreen, saving power and avoiding unwanted "touches" on the screen (you don't want your ear to hang up, after all). Another problem solved (wait for it). Unfortunately, the iPhone does not also feature Wiimote functionality (wiiPhone anyone?)
  • Upgradeable Interface: One of the most important things that having a touch screen interface allows Apple to do is provide updates to installed software and even new applications (given that it's running a version of OS X, this is probably a given). Let's say that the interface for browsing contacts is a little off, or the keyboard is spaced wrong. With a physical keyboard on a smart phone, you can't fix that problem without redesigning the whole thing and making the customer purchase a new piece of hardware. With the iPhone, Apple can just roll out an update.
  • Apple could put Blackberry out of business with this thing, provided that the functionality is there (it appears that it is for Yahoo mail, but will it work with my company? I can't tell just yet.). Blackberries always seemed like a fully featured kludge to me. The iPhone is incredibly elegant in comparison (not that it isn't elegant all by itself). This would also mitigate the whole high price issue: companies might pay for this thing if it works as well as it seems, and people are always more willing to spend their company's money than their own.
Ok, you know what's coming. Human beings don't solve problems. They trade one set of problems for another, in the hopes that the new are better than the old. Despite the fact that I haven't actually used the iPhone, what are some potential issues?
  • The touchscreen: Like the iPod's clickwheel, the iPhone's greatest strength could prove to be its greatest weakness. Touch screens have been in use for years and have become pretty well understood and refined... but they can also be imprecise and, well, touchy. When watching the demo, Steve didn't seem to be having any problem executing various options, but I'm not sure how well the device will be able to distinguish between "I want to scroll" and "I want to select" (unless selecting was a double-tap, but I don't think it was). Designing a new touch screen input interface is a tricky human factors problem, and I'm willing to bet it will take a little while to be perfected. Like the scrollwheel, I can see it being easy to overshoot or select the wrong item. I could certainly be wrong, and I look forward to fiddling with it at the local Mac store to see just how responsive it really is (it's hard to comment on something you've never used). However, I'm betting that (again like the scrollwheel) the touchscreen will be a net positive experience.
  • Durability: Steven Den Beste hits (scroll down) on what I think may be the biggest problem with the touch screen:
    I have some serious concerns about long term reliability of the touch panel. When it's riding inside a woman's purse, for instance, how long before the touch panel gets wrecked? Perhaps there's a soft carrying case for it -- but a lot of people will toss that, and carry the phone bare. Nothing protects that panel, and it covers one of the two largest faces on the unit. There are a thousand environmental hazards which could wreck it: things dropped onto it, or it being dropped onto other things. And if the touch panel goes bad, the rest of the unit is unusable.
    Indeed. iPods are notorious for getting scratched up, especially the screens. How will that impact the display? How will it impact the touch screen?
  • Two hands? It looks like you need to use two hands to do a lot of these touch screen operations (one to hold, the other to gesture). Also, when writing an email, a little qwerty keyboard appears on the touch screen... which is nice, but which also might be difficult to use with one hand or without looking (physical keyboards allow you to figure out what key you're on by touch, and also have little nubs - home keys - which don't translate to the touch screen). I don't know how much of an issue this will be, but it will affect some people (I know someone who will type emails on their Blackberry with one hand, while driving. This is an extreme case, to be sure, but it doesn't seem possible with the touch screen).
  • Zooming: The zooming feature in web browsing is neat, but the page they used in the demo (the NY Times homepage) has 5 columns, which seems ideal for zooming. How will other pages render? Will zooming be as useful? The glimpses at this functionality aren't enough to tell how well it will handle the web... (Google Maps looked great though)
  • Does it do too much? This phone looks amazing, but its price tag is prohibitive for me, especially since I probably won't use a significant portion of the functionality. I love that it does video, and while the 3.5" screen is bigger than my iPod's screen, I have to admit that I've never used the iPod video to watch something (maybe if I travelled more...) Brian Tiemann notes:
    If it weren't for the phone, I would buy this in a heartbeat. As it is, I wish (as does Damien Del Russo) that there were a way to buy it without the Cingular plan, so you could just use it as an iPod with wireless web browsing and e-mail and the like.
    Again, there is a worry that a device that tries to do everything for everyone will end up being mediocre at everything. However, I think Apple has made a very admirable attempt, and the touch screen concept really does cut down on this by allowing applications their own UIs and also allowing updates to those UIs if it becomes necessary. They've done as good a job as I think is possible at this time.
  • Battery Life: This goes along with the "does it do too much" point. I mentioned above that the battery life seems decent, and it does. However, with a device that does this much, I have a feeling that the 5 hours of use they claim will still feel a little short, especially when you're using all that stuff. This is one of the reasons I never seriously considered getting a music/camera/phone a while back: I don't want to run out my batteries playing music, then not be able to make an important call. This is a problem for mobile devices in general, and battery technology doesn't seem to be advancing as rapidly as everything else.
  • Monopoly: This phone will only further cement iTunes' dominant position in the marketplace. Is this a good thing or a bad thing? I go back and forth. Sometimes Apple seems every bit as evil as Microsoft, but then, they also seem a lot more competent too. The Zune looks decent, but it's completely overshadowed by this. We could have a worse monopoly, I guess, but I don't like stuff like DRM (which is reasonable, yes, but still not desirable except insofar as it calms down content owners) and proprietary formats that Apple won't license. Will third parties be able to develop apps for the iPhone? It could certainly be worse, but I'm a little wary.
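The "I want to scroll" versus "I want to select" problem I mentioned above is, at its core, a gesture classification problem, and one common way to attack it is with simple movement and timing thresholds. Here's a minimal sketch of the idea in Python; the threshold values are completely made up for illustration (I obviously have no idea how Apple actually implements this), and a real driver would also track velocity, multi-touch, and so on:

```python
# Toy classifier for distinguishing a "tap" (select) from a "scroll" gesture.
# Thresholds are invented for illustration; a real touchscreen driver
# would tune these empirically.

TAP_MAX_DISTANCE = 10.0   # pixels of movement before a touch stops being a tap
TAP_MAX_DURATION = 0.3    # seconds of contact before it stops being a tap

def classify_gesture(points):
    """points: list of (x, y, timestamp) samples from touch-down to touch-up."""
    x0, y0, t0 = points[0]
    x1, y1, t1 = points[-1]
    distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    duration = t1 - t0
    if distance <= TAP_MAX_DISTANCE and duration <= TAP_MAX_DURATION:
        return "tap"
    return "scroll"

# A quick touch-and-release reads as a tap; a longer drag reads as a scroll.
print(classify_gesture([(100, 100, 0.0), (102, 101, 0.15)]))  # tap
print(classify_gesture([(100, 100, 0.0), (100, 300, 0.40)]))  # scroll
```

The tricky human factors part is exactly where those thresholds sit: too tight and taps get misread as scrolls, too loose and scrolls trigger accidental selections.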
All in all, it's quite impressive. Most of the potential issues don't seem insurmountable, and I think Apple has a hit on their hands. It should also be interesting to see if other cell phone makers respond in any way. The cell phone market is gigantic (apparently nearly a billion cell phones were sold last year), and it seems like a lot of the best phones are only available overseas. Will we start to see better phones at a cheaper price? Unfortunately, I don't think I'll be getting an iPhone anytime soon, though I will keep a close eye on it. Once they work out the bugs and the price comes down, I'll definitely be tempted.

Updates: Brian Tiemann has further thoughts. Kevin Murphy has some thoughts as well. Ars Technica also notes some issues with the iPhone, and has some other good commentary (actually, just read their Infinite Loop journal). I think the biggest issue I forgot to mention is that the iPhone is exclusive to Cingular (and you have to get a 2 year plan at that).
Posted by Mark on January 10, 2007 at 12:08 AM .: Comments (4) | link :.

End of This Day's Posts

Sunday, November 19, 2006

Link Dump
Time is short this week, so a few quick links:
  • The 1,000 Greatest Films: Aggregated from 1,193 individual critics' and filmmakers' top-ten lists. They've got all sorts of different ways to look at the numbers, including a way to keep track of which ones you have seen. As you might expect, the list is diverse and somewhat contentious, with lots of foreign films and some very questionable choices. There are tons of films I've never even heard of. The list is somewhat skewed towards older films, as they use some older lists (some of the lists used are as old as 1952), but then, that's still to be expected. Older films tend to get credit for their importance, and not as much because of their entertainment value today (I'm horribly understating this issue, which could probably use a blog entry of its own). As an aside, the list sometimes reads like the Criterion Collection catalog, which is pretty funny. I used the listkeeper site (which is pretty neat and might help make these type of memes a little easier to deal with), and I've apparently seen somewhere around 16% of the list. Given the breadth of the films covered in the list, I think that's pretty impressive (though I'll probably never get past 30%).
  • Shuttle Launch Seen From ISS: Photos of a Space Shuttle launch as seen from the International Space Station. Neato.
  • A Million Random Digits with 100,000 Normal Deviates: Ok, so this is a book comprised solely of a bunch of random numbers, and that's it. Nothing funny or entertaining there, except the Amazon reviewers are having a field day with it. My favorite review:
    The book is a promising reference concept, but the execution is somewhat sloppy. Whatever algorithm they used was not fully tested. The bulk of each page seems random enough. However at the lower left and lower right of alternate pages, the number is found to increment directly.
    Ahhh, geek humor. [via Schneier]
  • BuzzFeed: A new aggregator that features "movies, music, fashion, ideas, technology, and culture" that are generating buzz (in the form of news stories and blog posts, etc...). It's an interesting idea as it's not really a breaking news site, but it seems to have its finger on the pulse of what folks are talking about (on the homepage now are sections on the Wii, PS3, Borat, and, of course, Snoop Dogg's new line of pet clothing). It's not like Digg or Reddit, and thus it doesn't suffer from a lot of their issues (unless they branch out into politics and religion). I'm sure some people will try to game the system, but it seems inherently more secure against such abuse.
That's all for now.

Update: This Lists of Bests website is neat. It remembers what movies you've seen, and applies them to other lists. For example, without even going through the AFI top 100, I know that I've seen at least 41% of the list (because of all the stuff I noted when going through the top 1000). You can also compare yourself with other people on the site, and invite others to do so as well. Cool stuff.
Posted by Mark on November 19, 2006 at 10:59 PM .: Comments (2) | link :.

End of This Day's Posts

Sunday, September 17, 2006

Magic Design
A few weeks ago, I wrote about magic and how subconscious problem solving can sometimes seem magical:
When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. ...

Once I noticed this, I began seeing similar patterns throughout my life and even history.
And indeed, Jason Kottke recently posted about how design works, referencing a couple of other designers, including Michael Bierut of Design Observer, who describes his process like this:
When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you’re lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can’t really explain that part; it’s like magic. Sometimes it even happens before you have a chance to tell me that much about your problem!
[emphasis mine] It is like magic, but as Bierut notes, this sort of thing is becoming more important as we move from an industrial economy to an information economy. He references a book about managing artists:
At the outset, the writers acknowledge that the nature of work is changing in the 21st century, characterizing it as "a shift from an industrial economy to an information economy, from physical work to knowledge work." In trying to understand how this new kind of work can be managed, they propose a model based not on industrial production, but on the collaborative arts, specifically theater.

... They are careful to identify the defining characteristics of this kind of work: allowing solutions to emerge in a process of iteration, rather than trying to get everything right the first time; accepting the lack of control in the process, and letting the improvisation engendered by uncertainty help drive the process; and creating a work environment that sets clear enough limits that people can play securely within them.
This is very interesting and dovetails nicely with several topics covered on this blog. Harnessing self-organizing forces to produce emergent results seems to be rising in importance significantly as we proceed towards an information based economy. As noted, collaboration is key. Older business models seem to focus on a more brute force way of solving problems, but as we proceed we need to find better and faster ways to collaborate. The internet, with its hyperlinked structure and massive data stores, has been struggling with a data analysis problem since its inception. Only recently have we really begun to figure out ways to harness the collective intelligence of the internet and its users, but even now, we're only scraping the tip of the iceberg. Collaborative projects like Wikipedia or wisdom-of-crowds aggregators like Digg or Reddit represent an interesting step in the right direction. The challenge here is that we're not facing the problems directly anymore. If you want to create a comprehensive encyclopedia, you can hire a bunch of people to research, write, and edit entries. Wikipedia tried something different. They didn't explicitly create an encyclopedia, they created (or, at least, they deployed) a system that made it easy for a large number of people to collaborate on a large number of topics. The encyclopedia is an emergent result of that collaboration. They sidestepped the problem, and as a result, they have a much larger and more dynamic information resource.

None of those examples are perfect, of course, but the more I think about it, the more I think that their imperfection is what makes them work. As noted above, you're probably much better off releasing a site that is imperfect and iterating, making changes and learning from your mistakes as you go. When dealing with these complex problems, you're not going to design the perfect system all at once. I keep saying that we need better information aggregation and analysis tools; the tools we have leave something to be desired. The point of these systems, though, is that they get better with time. Many older information analysis systems break when you increase the workload quickly. They don't scale well. These newer systems only really work well once they have high participation rates and large amounts of data.

It remains to be seen whether or not these systems can actually handle that much data (and participation), but like I said, they're a good start and they're getting better with time.
Posted by Mark on September 17, 2006 at 08:01 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, September 10, 2006

Yet Another Link Dump
Time is short this week, so it's time for Yet Another Link Dump (YALD!):
  • Who Writes Wikipedia? An interesting investigation of one of the controversial aspects of Wikipedia. Some contend that the authors are a small but dedicated bunch, others claim that authorship is large and diverse (meaning that the resulting encyclopedia is self-organizing and emergent). Aaron Swartz decided to look into it:
    When you put it all together, the story become clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it. In addition, insiders rack up thousands of edits doing things like changing the name of a category across the entire site -- the kind of thing only insiders deeply care about. As a result, insiders account for the vast majority of the edits. But it's the outsiders who provide nearly all of the content.

    And when you think about it, this makes perfect sense. Writing an encyclopedia is hard. To do anywhere near a decent job, you have to know a great deal of information about an incredibly wide variety of subjects. Writing so much text is difficult, but doing all the background research seems impossible.

    On the other hand, everyone has a bunch of obscure things that, for one reason or another, they've come to know well. So they share them, clicking the edit link and adding a paragraph or two to Wikipedia. At the same time, a small number of people have become particularly involved in Wikipedia itself, learning its policies and special syntax, and spending their time tweaking the contributions of everybody else.
    Depending on how you measure it, many perspectives are correct, but the important thing here is that both types of people (outsiders and insiders) are necessary to make the system work. Via James Grimmelman, who has also written an interesting post on Wikipedia Fallacies that's worth reading.
  • Cyber Cinema, 1981-2001: An absurdly comprehensive series of articles chronicling cyberpunk cinema. This guy appears to know his stuff, and chooses both obvious and not-so-obvious films to review. For example, he refers to Batman as "a fine example of distilled Cyberpunk." I probably wouldn't have pegged Batman as cyberpunk, but he makes a pretty good case for it... Anyway, I haven't read all of his choices (20 movies, 1 for each year), but it's pretty interesting stuff. [via Metaphlog]
  • The 3-Day Novel Contest: Well, it's too late to partake now, but this is an interesting contest where entrants all submit a novel written in 3 days. The contest is usually held over labor day weekend (allowing everyone to make the most of their long holiday weekend). The Survival Guide is worth reading even if you don't intend on taking part. Some excerpts: On the attitude required for such an endeavor:
    Perhaps the most important part of attitude when approaching a 3-Day Novel Contest is that of humility. It is not, as one might understandably and mistakenly expect, aggression or verve or toughness or (as it has been known) a sheer murderous intent to complete a 3-Day Novel (of this latter approach it is almost always the entrant who dies and not the contest). Let’s face it, what you are about to do, really, defies reality for most people. As when in foreign lands, a slightly submissive, respectful attitude generally fares better for the traveller than a self-defeating mode of overbearance. As one rather pompous contestant confessed after completing the contest: “I’ve been to Hell, and ended up writing about it.”
    On outlines and spontaneity:
    Those without a plan, more often than not, find themselves floundering upon the turbulent, unforgiving seas of forced spontaneous creativity. An outline can be quite detailed and, as veterans of the contest will also tell you, the chances of sticking to the outline once things get rolling are about 1,000 to 1. But getting started is often a major hurdle and an outline can be invaluable as an initiator.
    Two things that interest me about this: plans that fall apart, but must be made anyway (which I have written about before) and the idea that just getting started is important (which is something I'll probably write about sometime, assuming I haven't already done so and forgot).

    On eating:
    Keep it simple, and fast. Wieners (straight from the package—protein taken care of). Bananas and other fruit (vitamin C, potassium, etc.). Keep cooking to a minimum. Pizzas, Chinese—food to go. Forget balance, this is not a “spa”, there are no “healing days”. This is a competition; a crucible; a hill of sand. Climb! Climb!
    Lots of other fun stuff there. Also, who says you need to do it on Labor Day weekend? Why not take a day off and try it out? [via Web Petals, who has some other interesting quotes from the contest]
That's all for now. Sorry for just throwing links at you all the time, but I've entered what's known as Wedding Season. Several weddings over the next few weekends, only one of which is in this area. This week's was in Rhode Island, so I had a wonderful 12-13 hours of driving to contend with (not to mention R.I.'s wonderful road system - apparently they don't think signs are needed). Thank goodness for podcasts - specifically Filmspotting, Mastercritic, and the Preston and Steve Show (who are professional broadcasters, but put their entire show (2+ hours) up, commercial free, every day).

Shockingly, it seems that I only needed to use two channels on my Monster FM Transmitter, and both of those channels are the ones I use around Philly. Despite this, I've not been too happy with my FM transmitter thingy. It gets the job done, I guess, but I find myself consistently annoyed at its performance (this trip being an exception). It seems that these things are very idiosyncratic and unpredictable, working better in some cars than others (thus some people swear by one brand, while others will badmouth that same brand). In large cities like New York and Philadelphia, the FM dial gets crowded, and thus it's difficult to find a suitable station, further complicating matters. I think my living in a major city area combined with an awkward placement of the cigarette lighter in my car (which I assume is a factor) makes it somewhat difficult to find a good station. What would be really useful would be a list of available stations and an attempt to figure out ways to troubleshoot your car's idiosyncrasies. Perhaps a wiki would work best for this, though I doubt I'll be motivated enough to spend the time installing a wiki system here for this purpose (does a similar site already exist? I did a quick search but came up empty-handed). (There are kits that allow you to tap into your car stereo, but they're costly and I don't feel like paying more for that than I did for the player...)
Posted by Mark on September 10, 2006 at 09:15 PM .: link :.

End of This Day's Posts

Sunday, September 03, 2006

Does Magic Exist?
I'm back from my trip and it appears that the guest posting has fallen through. So a quick discussion on magic, which was brought up by a friend on a discussion board I frequent. The question: Does magic exist?

I suppose this depends on how you define magic. Arthur C. Clarke once famously said that "Any sufficiently advanced technology is indistinguishable from magic." And that's probably true, right? If some guy can bend spoons with his thoughts, there's probably a rational explanation for it... we just haven't figured it out yet. Does it count as magic if we don't know how he's doing it? What about when we do figure out how he's doing it? What if it really was some sort of empirically observable telekinesis?

After all, magicians have been performing for hundreds of years, relying on sleight of hand and misdirection1 (amongst other tricks of the trade). However, I suspect that's not the type of answer that's being sought.

One thing I think is interesting is the power of thought and how many religious and "magical" traditions were really just ways to harness thought in a productive fashion. For example, crystal balls are often considered to be a magical way to see the future. While not strictly true, it was found that those who look into crystal balls for a long period of time end up entering a sort of trance, similar to hypnosis, and the human mind is able to make certain connections it would not normally make2. Can such a person see the future? I doubt it, but I don't doubt that such people often experience a "revelation" of sorts, even if it is sometimes misguided.

However, you see something similar, though a lot more controlled and a lot less hokey, in a lot of religious traditions. For instance, take Christian Mass and prayer. Mass offers a number of repetitive aspects like singing combined with several chances for reflection and thought. I've always found that going to mass was very helpful in that it put things in a whole new perspective. Superficial things that worried me suddenly seemed less important and much more approachable. Repetitive rituals (like singing in Church) often bring back powerful feelings of the past, etc... further reinforcing the reflection from a different perspective.

Taking it completely out of the spiritual realm, I see very rational people doing the same thing all the time. They just aren't using the same vocabulary. When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. Once you remove the theology from it, prayer is really a similar process.

Once I noticed this, I began seeing similar patterns throughout my life and even history. For example, Archimedes. He was tasked with determining whether a given substance was gold or not (at the time, this was a true challenge). He toiled and slaved at the problem for weeks, pushing all other aspects of his life away. Finally, his wife, sick of her husband's dirty appearance and bad odor, made him take a bath. As he stepped into the tub, he noticed the water rising and had a revelation... this displacement could be used to accurately measure volume, which could then be used to determine density and ultimately whether or not a substance was gold. The moral of the story: Listen to your wife!3

Have I actually answered the question? Well, I may have veered off track a bit, but I find the process of thinking to be interesting and quite mysterious. After all, whatever it is that's going on in our noggins isn't understood very well. It might just be indistinguishable from magic...

1 - Note to self: go see The Illusionist! Also, The Prestige looks darn good. Why does Hollywood always produce these things in pairs? At least it looks like there's good talent involved in each of these productions...

2 - Oddly enough, I discovered this nugget on another trip through the library stacks while I was supposed to be studying in college. Just thought I should call that out in light of recent posting...

3 - Yes, this is an anecdote from the movie Pi.
Posted by Mark on September 03, 2006 at 11:58 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, May 14, 2006

The Victorian Internet and Centralized Solutions
A few weeks ago, I wrote a post about how the internet affects our ability to think, pulling from Nicholas Carr's post on internet and mindlessness. I disagreed with Carr's skepticism, and in the comments, Samael noted that Carr was actually using a common form of argument.
This seems to be a pretty common form of argument, though.

If the advent of a technology tends to create a certain problem, what is to blame for that problem?

It's not much different from the "guns/video games/music/movies are to blame for violence" or "video games/television are to blame for short attention spans" or "junk food is responsible for obesity" arguments.
Carr's argument is in the same form - the sea of information made possible by the internet is to blame for a deterioration in our ability to think. I rejected that because of choice - technology does not force us to think poorly; we choose how we interact with technology (especially on-demand technology like the internet). It's possible to go overboard, but there's nothing forcing that to happen. It's our choice. In any case, this isn't the first time a technology that led to a massive increase in communication has caused these problems. In his book The Victorian Internet, Tom Standage explores the parallels between the telegraph networks of the nineteenth century and the internet of today. Jon Udell summarizes the similarities:
A 19th-century citizen transported to today would be amazed by air travel, Standage suggests, but not by the Internet. Been there, done that.

Multiprotocol routers? Check. Back then, they translated between Morse code and scraps of paper in canisters shot through pneumatic tubes.

Fraud? Check. Stock market feeds were being spoofed in the 1830s, back when the telegraph network ran on visual semaphores rather than electrical pulses.

Romance? Check. The first online marriage was really a telegraph marriage, performed not long after the dawn of electric telegraphy.

Continuous partial attention? Check. In 1848 the New York businessman W.E. Dodge was already feeling the effects of always-on connectivity: "The merchant goes home after a day of hard work and excitement to a late dinner, trying amid the family circle to forget business, when he is interrupted by a telegram from London."
All too often, when I listen to someone describe a problem, I feel a sensationalistic vibe. It's usually not that I totally disagree that something is a problem, but the more I read of history and the more I analyze certain issues, I find that much of what people are complaining about today isn't all that new. Yes, the internet has given rise to certain problems, but they're not really new problems. They're the same problems ported to a new medium. As shown in the quote above, many of the internet's problems also affected telegraphy nearly two centuries ago (I'd wager that the advance of the printing press led to similar issues in its time as well). That doesn't make them less of a problem (indeed, it actually means that the problem is not easily solved!), but it does mean we should perhaps step back and maybe turn down the rhetoric a bit. These are extremely large problems and they're not easily solved.

It almost feels like we expect there to be a simple solution for everything. I've observed before that there is a lot of talk about problems that are incredibly complex as if they really aren't that complex. Everyone is trying to "solve" these problems, but as I've noted many times, we don't so much solve problems as we trade one set of problems for another (with the hope that the new set of problems is more favorable than the old). What's more, we expect these "solutions" to come at a high level. In politics, this translates to a Federal solution rather than relying upon state and local solutions. A Federal law has the conceit of being universal and fair, but I don't think that's really true. When it comes to large problems, perhaps the answer isn't large solutions, but small ones. Indeed, that's one of the great things about the structure of our government - we have state and local governments which (in theory) are more responsive and flexible than the Federal government. I think what you find with a centralized solution is something that attempts to be everything to everyone, and as a result, it doesn't help anyone.

For example, Bruce Schneier recently wrote about identity theft laws.
California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.

Except that it won't do the same thing: The federal bill has become so watered down that it won't be very effective. I would still be in favor of it -- a poor federal law is better than none -- if it didn't also pre-empt more-effective state laws, which makes it a net loss.
It's a net loss because the state laws are stricter. This also brings up another point about centralized systems - they're much more vulnerable to attack than a decentralized or distributed system. It's much easier to lobby against (or water down) a single Federal law than it is to do the same thing to 50 state laws. State and local governments aren't perfect either, but their very structure makes them a little more resilient. Unfortunately, we seem to keep focusing on big problems and proposing big centralized solutions, bypassing rather than taking advantage of the system our founding fathers wisely put into place.

Am I doing what I decry here? Am I being alarmist? Probably. The trend for increasing federalization is certainly not new. However, in an increasingly globalized world, I'm thinking that resilience will come not from large centralized systems, but at the grassroots level. During the recent French riots, John Robb observed:
Resilience isn't limited to security. It is also tied to economic prosperity. There aren't any answers to this on the national level. The answer is at the grassroots level. It is only at that level that you get the flexibility, innovation, and responsiveness to compete effectively. The first western country that creates a platform for economic interop and at the same time decentralizes power over everything else is going to be a big winner.
None of this is to say that grassroots efforts are perfect. There are a different set of issues there. But as I've observed many times in the past, the fact that there are issues shouldn't stop us. There are problems with everything. What's important is that the new issues we face be more favorable than the old...
Posted by Mark on May 14, 2006 at 08:13 PM .: Comments (0) | link :.

End of This Day's Posts

Saturday, May 13, 2006

Technology Link Dump
My last post on technological change seems to have struck a nerve and I've been running across a lot of things along similar lines this week... Here are a few links on the subject:
  • Charlie Stross is writing his next novel on his cell phone:
    Being inclined towards crazy stunt performances, I'm planning on writing "Halting State" on my mobile phone. This is technologically feasible because the phone in question has more memory and online storage than every mainframe in North America in 1972 (and about the same amount of raw processing power as a 1977-vintage Cray-1 supercomputer). It's a zeitgeist thing: I need to get into the right frame of mind, and I need to use a mobile phone for the same reason Neal Stephenson used a fountain pen when he wrote the Baroque cycle. After all, I want to stick my head ten years into the future. Personal computers are already passé; sales are declining, performance is stagnating, the real action is all in the interstitial networked devices that keep washing up on the beaches of our bandwidth ocean, crazy-weird things like 3G phones and battery-powered network attached storage boxes and bluetooth-controlled vibrators. (It's getting weird out there in embedded intelligence land; the net is alive to the sound of pinging toasters, RFID chips are the latest virus target, and people are making business deals inside computer games.)
    I have yet to read one of Stross's novels, but he's in the queue...
  • Speaking of speculative fiction, Steven Den Beste has a post on Chizumatic (no permalinks, so you'll have to go a scrollin') about the difficulties faced in creating a plausible science fiction story placed in the future:
    1. Science and engineering now are expanding on an exponential curve.
    2. But not equally in all areas. In some areas they have run up against intransigent problems.
    3. Advances in one area can have incalculably large effects on other areas which at first seem completely unrelated.
    4. Much of this is driven by economic forces in ways which are difficult to predict or even understand after the fact.

    For instance, there was a period in which the main driver of technical advances in desktop computing was business use. But starting about 1994 that changed, and for a period of about ten years the bleeding edge was computer gamers. ...

    You look at the history of technological development and it becomes clear that it isn't possible for any person to predict it. I can tell you for sure that when we were working on the Arpanet at BBN in the 1980's, we didn't have the slightest clue as to what the Internet would eventually become, or all the ways in which it would be used. The idea of 8 megabit pipes into the home was preposterous in the extreme -- but that's what I've got. This is James Burke's "Connections" idea: it all relates, and serendipitous discoveries in one area can revolutionize other areas which are not apparently related in any way. How much have advances in powered machinery changed the lives and careers of farmers, for instance?

    With acceleration in development of new technologies, just what kind of advances could we really expect 200 years from now? The only thing that's certain is that it's impossible for us to guess. But if you posit interstellar travel by then, then there should be a lot of advances in other areas, and those advances may be used in "unimportant" ways to make life easier for people, and not just in big-ticket "obvious" ways.
    It's an excellent post and it ends on an... interesting note.
  • Shamus found an old 2001 article in PC Gamer speculating what computers would be like in 2006. It turns out that in some areas (like CPU speed), they were wildly overoptimistic, in other areas (broadband and portable devices), not so much.
  • Your Upgrade Is Ready: This popular mechanics article summarizes some advancements on the biological engineering and nanotechnology front.
    Weddell seals can stay underwater comfortably for more than an hour. As concrete-shoe wearers have discovered, humans can't make it past a few minutes. Why not? The seals don't have enormous lungs in comparison to humans--but they do have extraordinary blood, capable of storing great quantities of oxygen. Robert Freitas, a research fellow at the Institute of Molecular Manufacturing, has published a detailed blueprint for an artificial red blood cell, which he calls a respirocyte. Injected into the bloodstream, these superefficient oxygen-grabbers could put the scuba industry out of business.

    As Freitas envisions it, each respirocyte--a ball measuring a thousandth of a millimeter across--is a tiny pressurized gas tank. Inject the balls and they course through the blood vessels, releasing oxygen and absorbing carbon dioxide in the body's periphery and recharging themselves with oxygen in the lungs. Freitas says respirocytes would transport oxygen 236 times more efficiently than red blood cells--and a syringeful could carry as much oxygen as your entire bloodstream.
    I tend to take stuff like this with a grain of salt, as such overviews usually end up being a little more sensational than reality, but still interesting reading. [via Event Horizon]
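The "syringeful" claim in the excerpt above is easy to sanity-check with a bit of arithmetic. This is a back-of-the-envelope sketch: the 236x figure comes from the article, but the ~5 liter adult blood volume is my own assumption, not something the excerpt states:

```python
# Back-of-the-envelope check of the "syringeful" claim above.
# The 236x efficiency figure is from the article; the 5-liter adult
# blood volume is an assumed typical value, not from the article.
blood_volume_ml = 5000    # assumed typical adult blood volume
efficiency_factor = 236   # respirocyte vs. red blood cell (per article)

# Volume of respirocytes needed to match the whole bloodstream's
# oxygen-carrying capacity:
equivalent_volume_ml = blood_volume_ml / efficiency_factor
print(f"{equivalent_volume_ml:.0f} ml")  # → 21 ml
```

About 21 milliliters -- roughly one large syringe, so the article's numbers are at least internally consistent.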
That's all for now...
Posted by Mark on May 13, 2006 at 12:39 PM .: Comments (4) | link :.

End of This Day's Posts

Sunday, May 07, 2006

Is Technology Advancing or Declining?
In Isaac Asimov's novel Prelude to Foundation, an unknown mathematician named Hari Seldon travels from his podunk home planet to the Galactic Empire's capital world to give a presentation on a theoretical curiosity he dubs psychohistory (which is essentially a way to broadly predict the future). Naturally, the potential for this theory attracts the powerful, and Seldon goes on the run with the help of a journalist friend named Chetter Hummin. Hummin contends that "The Galactic Empire is Dying." Seldon is frankly surprised by this thesis and eventually asks for an explanation:
... "all over the Galaxy trade is stagnating. People think that because there are no rebellions at the moment and because things are quiet that all is well and that the difficulties of the past few centuries are over. However, political infighting, rebellions, and unrest are all signs of a certain vitality too. But now there's a general weariness. It's quiet, not because people are satisfied and prosperous, but because they're tired and have given up."

"Oh, I don't know," Seldon said dubiously.

"I do. And the Antigrav phenomenon we've talked about is another case in point. We have a few gravitic lifts in operation, but new ones aren't being constructed. It's an unprofitable venture and there seems no interest in trying to make it profitable. The rate of technological advance has been slowing for centuries and is down to a crawl now. In some cases, it has stopped altogether. Isn't this something you've noticed? After all, you're a mathematician."

"I can't say I've given the matter any thought."
Hummin acknowledges that he could be wrong (partly out of a desire to manipulate Seldon to develop psychohistory so as to confirm whether or not the Empire really is dying), but those who've read the Foundation Novels know he's right.

The reasons for this digression into decaying Galactic Empires include my affinity for quoting fiction to make a point and a post by Ken at ChicagoBoyz regarding technological stagnation (which immediately made me think of Asimov's declining Empire). Are we in a period of relative technological stagnation? I'm hardly an expert, but I have a few thoughts.

First, what constitutes advance or stagnation? Ken points to a post that argues that the century of maximum change is actually the period 1825-1925. It's an interesting post, but it only pays lip service to the changes he sees occurring now:
From time to time I stumble across articles by technology-oriented writers claiming that we're living in an era of profound, unprecedented technological change. And their claim usually hinges on the emergence of the computer.

Gimme a break.

I'll concede that in certain areas such as biology and medicine, changes over the past few decades have been more profound than at any time in history. And true, computers have made important changes in details of our daily lives.

But in those daily life terms, the greatest changes happened quite a while ago.
The post seems to focus on disruptive changes, but if something is not disruptive, does that really mean that technology is not advancing? And why are changes in transportation capabilities (for instance) more important than communication, biology, or medicine? Also, when we're talking about measuring technological change over a long period of time, it's worth noting that advances or declines would probably not move in a straight line. There would be peaks where it seems like everything is changing at once, and lulls when it seems like nothing is changing (even though all the pieces may be falling into place for a huge change).

Most new technological advances are really abstracted efficiencies - it's the great unglamorous march of technology. They're small and they're obfuscated by abstraction, thus many of the advances are barely noticed. Computers and networks represent a massive improvement in information processing and communication capabilities. I'd wager that even if we are in a period of relative technological stagnation (which I don't think we are), we're going to pull out of it in relatively short order because the advent of computers and networks means that information can spread much faster than it could in the past. A while ago, Steven Den Beste argued that the four most important inventions in history are: "spoken language, writing, movable type printing and digital electronic information processing (computers and networks)."
When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years. With computer networks, it can happen in a week if not less. ... That's a radical change in capability; a sufficient difference in degree to represent a difference in kind. It means that people all over the world can participate in debate about critical subjects with each other in real time.

We're already seeing some of the political, technological and cultural effects of the Internet, and this is just a start. What this means is that drastic cultural shakeout cannot be avoided. The next fifty years are going to be a very interesting time as the Internet truly creates the Global Village.
Indeed, part of the reason technologists are so optimistic about the rate of technological change is that we see it all the time on the internet. We see some guy halfway across the world make an observation or write a script, and suddenly it shows up everywhere, spawning all sorts of variants and improvements. When someone invents something these days, it only takes a few days for it to be spread throughout the world and improved upon.
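Den Beste's spread times can be put on one scale to show just how large the jump is. The figures below are his rough numbers from the quote above, converted to days ("a couple of centuries" taken as 200 years):

```python
# Rough relative speeds of idea propagation, using the approximate
# figures from the quote above, expressed in days.
spread_days = {
    "speech":   1000 * 365,  # "a thousand years"
    "writing":  200 * 365,   # "a couple of centuries"
    "printing": 50 * 365,    # "fifty years"
    "networks": 7,           # "a week if not less"
}

baseline = spread_days["speech"]
for medium, days in spread_days.items():
    print(f"{medium:>8}: {baseline / days:>6.0f}x faster than speech")
```

Printing was a 20x improvement over speech; networks are a ~52,000x improvement -- which is the "difference in degree sufficient to represent a difference in kind."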

Of course, there are many people who would disagree with Ken's assertion that we're in a period of technological stagnation. People like Ray Kurzweil or Vernor Vinge would argue that we're on the edge of a technological singularity - that technology is advancing so quickly that we can't quantify it, and that we're going to eventually use technology to create an entity with greater than human intelligence.

I definitely think there is a problem with determining the actual rate of change. As I mentioned before, what qualifies as a noteworthy change? It's also worth noting that long-term technological effects are sometimes difficult to forecast. Most people picture the internet as being a centrally planned network, but it wasn't. Structurally, the internet is more like an evolving ecosystem than anything that was centrally designed. Those who worked on the internet in the 1960s and 1970s probably had no idea what it would eventually become or how it would affect our lives today. And honestly, I'm not sure we know today what it will be like in another 30 years...

One of the reasons I quoted Asimov's novel at the beginning of this post is that I think he captured what a technologically declining civilization would be like. The general weariness, the apathy, and the lack of desire to even question why. Frankly, I find it hard to believe that things are slowing down these days. Perhaps we're in a lull (it sure doesn't seem like it though), but I can see that edge, and I don't see weariness in those that will take us there...
Posted by Mark on May 07, 2006 at 06:59 PM .: link :.

End of This Day's Posts

Thursday, February 09, 2006

Unintended Customers
The Art of Rainmaking by Guy Kawasaki: An interesting article about salesmanship and what is referred to as "rainmaking." Kawasaki lists out several ways to practice the art of rainmaking, but this first one caught my eye because it immediately reminded me of Neal Stephenson's Cryptonomicon, and regular readers (all 5 of you) know I can't resist a Stephenson reference.
“Let a hundred flowers blossom.” I stole this from Chairman Mao although I'm not sure how he implemented it. In the context of capitalism (Chairman Mao must be turning over in his grave), the dictum means that you sow seeds in many markets, see what takes root, and harvest what blooms. Many companies freak out when unintended customers buy their product. Many companies also freak out when intended customers buy their product but use it in unintended ways. Don't be proud. Take the money.
This immediately reminded me of the data haven (a secure computer system that is protected by its lack of governmental oversight as well as technical means like encryption) in the "modern-day" segments of Cryptonomicon. Randy Waterhouse works for the company that's attempting to set up a data haven, and he finds that most of his customers want to use the data haven to store money. Pretty straightforward, right? Well, most of the people who want to store their money there are criminals of the worst sort. I guess in that particular case, there is reason to freak out at these unexpected customers, but I thought the reference was interesting because while there may be lots of legitimate uses for a data haven, the criminal element would almost certainly be attracted to a way to store their drug money (or whatever) with impunity (that and probably spam, pornography, and gambling). Like all advances in technology, the data haven could be used for good or for ill...
Posted by Mark on February 09, 2006 at 11:03 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, February 05, 2006

A Spectrum of Articles
When you browse the web often, especially when you're looking at mostly weblogs, you start to see some patterns emerging. A new site is discovered, then propagates throughout the blogosphere in fairly short order. I'm certainly no expert at spotting such discoveries, but one thing I've noticed being repeatedly referenced this past week is the IEEE Spectrum (a magazine devoted to electrical engineering). I've seen multiple blogs referencing multiple articles from this magazine, though I can't think of a single reference in the past. Here's a few articles that seem interesting:
  • Re-engineering Iraq (February 2006): A close look at rebuilding Iraq's electrical system. Alas, no mentions of anything resembling Operation Solar Eagle... (don't remember who posted about this one, but I did see it in a couple of places).
  • How Europe Missed The Transistor (November 2005): One of the most important inventions of the 20th century (which is no slouch when it comes to important inventions) was the transistor. This article delves into the early history of the transistor and similar technologies developed in Europe and the U.S., as well as how these devices became commercially successful. David Foster has an excellent post about the "importance of decentralization and individual entrepreneurship" in facilitating the commercialization of new technologies.
  • Patents 2.0 (February 2006): Slashdot posted about this interesting proposal recently: "a new type of patent that wouldn't require formal examination, would cost significantly less than traditional patents, would last only 4 years from date of first commercial product, and which wouldn't carry a presumption of validity." Interesting stuff. It does appear that the high rate of technological advance should be driving the implementation of something like this when it comes to both patents and copyright law.
I haven't read all of this yet, but there's definitely good stuff there. Perhaps more comments later this week (time is still short, but my schedule will hopefully be opening up a bit in the next few weeks).
Posted by Mark on February 05, 2006 at 11:43 PM .: Comments (3) | link :.

End of This Day's Posts

Sunday, January 01, 2006

Analysis and Ignorance
A common theme on this blog is the need for better information analysis capabilities. There's nothing groundbreaking about the observation, which is probably why I keep running into stories that seemingly confirm the challenge we're facing. A little while ago, Boing Boing pointed to a study on "visual working memory" in which the people who did well weren't better at remembering things than other people - they were better at ignoring unimportant things.
"Until now, it's been assumed that people with high capacity visual working memory had greater storage but actually, it's about the bouncer – a neural mechanism that controls what information gets into awareness," Vogel said.

The findings turn upside down the popular concept that a person's memory capacity, which is strongly related to intelligence, is solely dependent upon the amount of information you can cram into your head at one time. These results have broad implications and may lead to developing more effective ways to optimize memory as well as improved diagnosis and treatment of cognitive deficits associated with attention deficit disorder and schizophrenia.
In Feedback and Analysis, I examined an aspect of how the human eye works:
So the brain gets some input from the eye, but it's sending significantly more information towards the eye than it's receiving. This implies that the brain is doing a lot of processing and extrapolation based on the information it's been given. It seems that the information gathering part of the process, while important, is nowhere near as important as the analysis of that data. Sound familiar?
Back in high school (and to a lesser extent, college), there were always people who worked extremely hard, but still couldn't manage to get good grades. You know, the people who would spend 10 hours studying for a test and still bomb it. I used to infuriate these people. I spent comparatively little time studying, and I did better than them. Now, there were a lot of reasons for this, and most of them don't have anything to do with me being smarter than anyone else. One thing I found was that if I paid attention in class, took good notes, and spent an honest amount of effort on homework, I didn't need to spend that much time cramming before a test (shocking revelation, I know). Another thing was that I knew what to study. I didn't waste time memorizing things that weren't necessary. In other words, I was good at figuring out what to ignore.

Analysis of the data is extremely important, but you need to have the appropriate data to start with. When you think about it, much of analysis is really just figuring out what is unimportant. Once you remove the noise, you're left with the signal and you just need to figure out what that signal is telling you. The problem right now is that we keep seeing new and exciting ways to collect more and more information without a corresponding increase in analysis capabilities. This is an important technical challenge that we'll have to overcome, and I think we're starting to see the beginnings of a genuine solution. At this point another common theme on this blog will rear its ugly head. Like any other technological advance, systems that help us better analyze information will involve tradeoffs. More on this subject later this week...
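The "bouncer" idea - analysis as mostly deciding what to ignore - can be sketched with a trivial filter. The relevance test below is obviously a stand-in for a much harder judgment; the names and example data are my own illustration:

```python
def analyze(observations, is_relevant):
    """Filter first, then analyze: the filter does most of the work."""
    return [x for x in observations if is_relevant(x)]

# A crude version of the test-prep example: discard notes that won't be on the test.
notes = [
    "test covers chapters 3-4",
    "teacher's tie was blue",
    "know the three causes of X",
    "someone sneezed",
]
study_list = analyze(notes, lambda note: "test" in note or "know" in note)
print(study_list)  # only the items worth studying survive the bouncer
```

The point isn't the filter itself (a one-line list comprehension) but that the whole value of the analysis lives in `is_relevant` - the part that decides what to throw away.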
Posted by Mark on January 01, 2006 at 10:55 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, December 11, 2005

More Trilemmas
Looking into the trilemma subject from last week's entry, I stumbled across Jason Kottke's post about what he calls a "Pick Two" system, using the "good, fast, or cheap, pick two" example to start, but then listing out a whole bunch more:
Elegant, documented, on time.
Privacy, accuracy, security.
Have fun, do good, stay out of trouble.
Study, socialize, sleep.
Diverse, free, equal.
Fast, efficient, useful.
Cheap, healthy, tasty.
Secure, usable, affordable.
Short, memorable, unique.
Cheap, light, strong.
I don't know if I agree with all of those, but regardless of their validity, Kottke is right to question why the "Pick Two" logic appears to be so attractive. Indeed, I even devised my own a while back when I was looking at my writing habits.
Why is "pick two out of three" the rule? Why not "one out of two" or "four out of six"? Or is "pick two out of three" just a cultural assumption?
He also wonders if there is some sort of underlying scientific or economic relationship at work, but was unable to find anything that fit really well. Personally, I found the triangle to be closest to what he was looking for. In a triangle, the sum of the interior angles is always 180 degrees. If you "pick two" of the angles, you know what the third will be. Since time and money are both discrete, quantifiable values, you should theoretically be able to control the quality of your project by playing with those variables.
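The triangle analogy can be made concrete: fix any two interior angles and the third is fully determined, just as fixing two of a project's variables constrains the third. A minimal sketch (the mapping of angles to time, money, and quality is my own illustration, not something from Kottke's post):

```python
def third_angle(a, b):
    """Given two interior angles of a triangle (in degrees), the third is determined."""
    c = 180 - a - b
    if c <= 0:
        raise ValueError("no valid triangle: the first two angles leave nothing for the third")
    return c

# "Pick two": commit to time and money, and quality is whatever remains.
print(third_angle(60, 90))  # 30
```

Squeeze the first two angles toward 180 degrees and the third shrinks toward nothing - which is roughly what happens to quality on a fast, cheap project.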

In a more general sense, I tend to think of a system with three main components as being inherently stable. I think this is because such a system is simple, yet complex enough to allow for a lot of dynamism. As one of the commenters on Kottke's post noted:
Seems like two out of three is the smallest tradeoff that's interesting. One out of two is boring. One out of three doesn't satisfy. Two out of three allows the chooser to feel like s/he is getting something out of the tradeoff (not just 50/50).
And once you start getting larger than three, the system begins to get too complex. Tweaking one part of the system has progressively less and less predictable results the bigger the system gets. The good thing about a system with three major components is that if one piece starts acting up, the other two can adjust to overcome the deficiency. In a larger system, the potential for deadlock and unintended consequences begins to increase.

I've written about this stability of three before. The stereotypical example of a triangular system is the U.S. Federal government:
One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn't become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders also did some interesting structural work as well.

Taking their cue from the English Parliament's relationship with the King of England, the founders decided to create a legislative branch separate from the executive. This, in turn, placed the two governing bodies in competition. However, this isn't a very robust system. If one of the governing bodies becomes more powerful than the other, they can leverage their advantage to accrue more power, thus increasing the imbalance.

A two-way balance of power is unstable, but a three-way balance turns out to be very stable. If any one body becomes more powerful than the other two, the two usually can and will temporarily unite, and their combined power will still exceed the third. So the founders added a third governing body, an independent judiciary.

The result was a bizarre sort of stable oscillation of power between the three major branches of the federal government. Major shifts in power (such as wars) disturbed the system, but it always fell back to a preferred state of flux. This stable oscillation turns out to be one of the key elements of Chaos theory, and is referred to as a strange attractor. These "triangular systems" are particularly good at this, and there are many other examples...
Another great example of how well a three part system works is a classic trilemma: "Rock, Paper, Scissors."
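The stability of that trilemma is easy to demonstrate: every option beats exactly one other and loses to exactly one other, so no single choice can dominate. A quick sketch of the cycle:

```python
# "Rock, Paper, Scissors" as a three-way system: each option beats exactly
# one other option, forming a cycle with no dominant strategy.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(a, b):
    if a == b:
        return None  # tie
    return a if BEATS[a] == b else b

# Verify the balance: each choice wins exactly one of its two matchups.
for choice in BEATS:
    wins = sum(winner(choice, other) == choice for other in BEATS if other != choice)
    print(choice, wins)
```

This is the same structure as the three-branch balance described above: whichever option "wins" against one opponent is itself beaten by the third, so the system always has a counterweight.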
Posted by Mark on December 11, 2005 at 02:14 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, December 04, 2005

The Design Trilemma
I've been writing about design and usability recently, including a good example with the iPod and a case where a new elevator system could use some work. Naturally, there are many poorly designed systems out there, and they're easy to spot, but even in the case of the iPod, which I think is well designed and elegant, I was able to find some things that could use improvement. Furthermore, I'm not sure there's all that much that can really be done to improve the iPod design without removing something that detracts more from the experience. As I mentioned in that post, a common theme on this blog has always been the trade-offs inherent in technological advance: we don't so much solve problems as we trade one set of disadvantages for another, in the hopes that the new set is more favorable than the old.

When confronted with an obviously flawed system, most people's first thought is probably something along the lines of: What the hell were they thinking when they designed this thing? It's certainly an understandable lamentation, but after the initial shock of the poor experience, I often find myself wondering what held the designers back. I've been involved in the design of many web applications, and I sometimes find the end result is different from what I originally envisioned. Why? It's usually not that hard to design a workable system, but it can become problematic when you consider how the new system impacts existing systems (or, perhaps more importantly, how existing systems impact new ones). Of course, there are considerations completely outside the technical realm as well.

There's an old engineering aphorism that says Pick two: Fast, Cheap, Good. The idea is that when you're tackling a project, you can complete it quickly, you can do it cheaply, and you can create a good product, but you can't have all three. If you want to make a quality product in a short period of time, it's going to cost you. Similarly, if you need to do it on the cheap and also in a short period of time, you're not going to end up with a quality product. This is what's called a Trilemma, and it has applications ranging from international economics to theology (I even applied it to writing a while back).

Dealing with trilemmas like this can be frustrating when you're involved in designing a system. For example, a new feature that would produce a tangible but relatively minor enhancement to customer experience would also require a disproportionate amount of effort to implement. I've run into this often enough to empathize with those who design systems that turn out horribly. Not that this excuses design failures or that this is the only cause of problems, but it is worth noting that the designers aren't always devising crazy schemes to make your life harder...
Posted by Mark on December 04, 2005 at 07:55 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, November 13, 2005

iPod Usability
After several weeks of using my new iPod (yes, I'm going to continue rubbing it in for those who don't have one), I've come to realize that there are a few things that are *gasp* not perfect about the iPod. A common theme on this blog has always been the tradeoffs inherent in technological advance: we don't so much solve problems as we trade one set of disadvantages for another, in the hopes that the new set is more favorable than the old.

Don't get me wrong, I love the iPod. It represents a gigantic step forward in my portable media capability, but it's not perfect. It seems that some of the iPod's greatest strengths are also its greatest weaknesses. Let's look at some considerations:
  • The Click Wheel - Simultaneously the best and worst feature of the iPod. How can this be? Well the good things the click wheel brings to the picture far outweigh the bad. What's so bad about the click wheel? I think the worst thing about it is the lack of precision. The click wheel is very sensitive and it is thus very easy to overshoot your desired selection. If you're sitting still at a desk, this isn't that much of a problem, but if you're exercising or driving, it can be a bit touchy. It's especially tricky with volume, as I sometimes want to increase the volume just a tick, but often overshoot and need to readjust. However, Apple does attempt to mitigate some of that with the "clicks," the little sounds generated as you scroll through your menu options. As I say, the good things about the click wheel far outweigh this issue. More on the good things in a bit.
  • The "clean" design - As Gerry Gaffney observed in a recent article for The Age:
    When products are not differentiated primarily by features and prices are already competitive, factors such as ease-of-use and emotional response can provide a real edge. The Apple iPod is often cited as an example; a little gadget that combines relative ease of use with a strong emotional response. This helps separate the iPod from the swathe of other portable players that are comparable in terms of features and price.
    There are two main pieces to the design of the iPod in my mind: one is the seamless construction and the other is the simplicity of the design. The seamlessness of the device and its simple white or black monochrome appearance definitely provide the sort of emotional response that Gaffney cites. But it might be even more than that - some people believe that the design is so universally accepted as "Clean" because the materials it uses evoke a subconscious feeling of cleanliness:
    Of course, we were aware of the obvious cues such as minimalist design; the simple, intuitive interface; the neutral white color. But these attributes alone inadequately explain this seemingly universal perception. It had to be referencing a deeper convention in the social consciousness… so, if a designer claimed that he had the answer—we were all ears.

    “So… as I was sitting on the toilet this morning” (this is of course where most good ideas come from), “I noticed the shiny white porcelain of the bathtub and the reflective chrome of the faucet on the wash basin… and then it hit me! Everybody perceives the iPod as ‘clean’ because it references bathroom materials!”
    The author also noticed that seamless design and a lack of moving parts is often used in science fiction to indicate advanced technology (think "Monolith" from 2001). Obviously, a "Clean" design doesn't necessarily make a device better or more usable, but good design often bundles clean with easy-to-use, and in the iPod, the two are inseparable. The click wheel's lack of precision notwithstanding, it's actually quite easy to use for the most common tasks. It's also ambidextrous, equally easy to use whether you're left- or right-handed. Some devices have lots of buttons and controls, which can be useful at times, but the iPod covers the absolutely necessary features extremely well with a minimum of physical controls. What's more, this economy of physical buttons does not detract from usability; it actually increases it, because the controls are so simple and intuitive. In the end, it looks great and is easy to use. What more can you ask for?
  • One thing I enjoy about the iPod is using its shuffle songs feature. Now that I've got most of my library in one device, I enjoy hearing random songs, one after the other. Sometimes it makes for great listening, sometimes appalling, but always interesting. However, there is one feature I'd like to see: if I'm listening to one song and I want to "break out" of the shuffle (and listen to the next song on that particular album), there's no way to do so short of navigating to that album and playing the next song manually (at least, I don't know of a way to do so - perhaps there is a not-so-intuitive way, which wouldn't be surprising, as I imagine this is a somewhat obscure request). Perhaps it's just that I like to listen to albums with tracks that seamlessly run into one another, the prototypical example being Pink Floyd's Dark Side of the Moon - the last 4 songs have a seamless quality that I really like to listen to as a whole, but which can be jarring if I only hear one of them.
  • This usability critique of the iPod makes mention of several of the above points, as well as some other good and bad features of the iPod:
    In Rob Walker’s New York Times Magazine article, "The Guts of a New Machine", Steve Jobs stated: "Most people make the mistake of thinking design is what it looks like... That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works."
    He mentions the same lack of precision issue I mentioned above, and also something about the backlight being difficult to turn on or off (which is something that I imagine is only a problem for the non-color screens).
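The "break out of shuffle" feature wished for above could be sketched as a player that remembers the playing track's position within its album, so a single command switches from random order to the album's next track. The class and method names here are my own invention for illustration, not anything from Apple's actual software:

```python
import random

class Player:
    def __init__(self, albums):
        # albums: dict mapping album name -> ordered list of track titles
        self.albums = albums
        self.tracks = [(name, i) for name, ts in albums.items() for i in range(len(ts))]
        self.current = None  # (album, track index) of the playing song

    def shuffle_next(self):
        """Pick any track at random, as the iPod's shuffle does."""
        self.current = random.choice(self.tracks)
        return self._title()

    def break_out(self):
        """Leave shuffle: play the next track on the current album, if any."""
        album, i = self.current
        if i + 1 < len(self.albums[album]):
            self.current = (album, i + 1)
            return self._title()
        return None  # already the last track on the album

    def _title(self):
        album, i = self.current
        return self.albums[album][i]

player = Player({"Dark Side of the Moon":
                 ["Us and Them", "Any Colour You Like", "Brain Damage", "Eclipse"]})
player.shuffle_next()
player.break_out()  # the album's next track instead of another random song
```

The key is that shuffle only needs to track one extra piece of state - the current track's album position - to support breaking out, which suggests the feature is cheap to add.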
In many ways, the iPod is very similar to its competing players. It has comparable features and price, and I'm quite sure that, even though the iPod's usability is excellent, its competitors probably aren't that far off. But there is definitely something more to the iPod and its design, and it's difficult to place. There seems to be a large contingent of people who are extremely hostile towards the iPod (probably for this very reason), insisting that people who like the iPod are some sort of brainwashed morons who are paying extra only for the privilege of having a player with a piece of fruit engraved on it. Perhaps, but even with the issues I cited above, the iPod has exceeded my expectations rather well.
Posted by Mark on November 13, 2005 at 07:44 PM .: Comments (4) | link :.

End of This Day's Posts

Sunday, November 06, 2005

Elevators & Usability
David Foster recently wrote a post about a new elevator system:
One might assume that elevator technology is fairly static, but then one would be wrong. The New York Times (11/2) has an article about significant improvements in elevator control systems. The idea is that you select your floor before you get on the elevator, rather than after, thereby allowing the system to dispatch elevators more intelligently--a 30% reduction in average trip time is claimed. ... All good stuff; shorter waiting times and presumably lower energy consumption as well.
(NYT article is here) Foster has some interesting comments on the management types who want to use this system to avoid being in an elevator with the normal folks, but the story caught my attention from a different angle.

I recently attended the World Usability Day event in Philadelphia, and the keynote speaker (Tom Tullis, of Fidelity Investments) started his presentation with a long anecdote concerning this new elevator technology. It seems that while this technology may have good intentions, it's execution could use a little work.

Perhaps it was just the particular implementation at the building he went to, but the system installed there was extremely difficult to use for a first-time user. First, the new system wasn't called out very much, so Tullis had actually gotten into one of the elevators and was flummoxed by the lack of buttons inside. Eventually, after riding the elevator up and then back down to the lobby, he noticed a keypad next to the elevator he had gotten into. So he understandably assumed that he should simply enter the desired floor there, figuring that the elevator would then open and take him to that floor. He typed in his destination floor, and was greeted with a screen displaying a large "E" (there's an image of this keypad on the right, and the presentation has lots of images and more information on the evolution of the elevator). Obviously an error, right? Well, no. Tullis eventually found a little sign in the lobby with a 6-page (!) manual explaining how the elevators work: each elevator cab has a letter assigned to it, and when you enter your floor, the system assigns you to one of the cabs. So "E" was referring to the "E" cab, not an error. Now armed with the knowledge of how the system works, Tullis was able to make it to his meeting (10 minutes late).

Naturally, I think this is a bit of an extreme case (though there were a few other bad things about his experience that I didn't even mention). The system was brand new and the building hadn't yet converted all of their elevators to the new system, so it seems obvious that the system usability would improve over time. There are several things that could make that experience easier:
  • In the image above, note the total lack of any directions whatsoever. It's especially bad because the placement of the keypad implies that it only applies to the elevator it's next to.
  • Depending on the layout of the elevator area, I think the best way to do this would be to have a choke point with a little podium that has the keypad and a concise list of instructions. This would force the user to see the system before they actually get to the elevators.
  • Once you use the system once and figure out how it works, it's probably much better, especially if all of the claimed efficiencies work out the way they sound.
  • As the NYT article notes, there are some other issues that need to be dealt with. For instance, most groups would naturally like to ride in the same elevator, but this presents a problem to this system, especially when only one person in the group actually uses the system. There's also some frustration with not being able to get on the first available elevator, though that may be mitigated by an elevator ride with less stops. You also can't change your mind once you get in the elevator...
  • It seems to me that this sort of system would be ideally suited to an extremely large skyscraper with a high volume of traffic (like a hotel). Most elevators probably wouldn't need to be converted, which means that most people wouldn't be exposed to this sort of thing until they make it to one of the larger buildings (which also means that the usability for first time users will still be quite important, even though it gets easier to use after your first time).
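The core idea behind this kind of destination dispatch can be sketched simply: riders enter their floor at the keypad, and the controller groups them onto lettered cabs by destination so each cab makes fewer stops. The grouping rule below is my own guess at the general approach, not the actual algorithm from the NYT article or the buildings that use it:

```python
def assign_cabs(requests, num_cabs):
    """Group destination-floor requests onto cabs so each cab makes fewer stops.
    requests: list of destination floors, one per rider."""
    # Sort destinations so nearby floors end up on the same cab.
    ordered = sorted(requests)
    per_cab = -(-len(ordered) // num_cabs)  # ceiling division
    cabs = {}
    for i, floor in enumerate(ordered):
        label = chr(ord("A") + i // per_cab)  # cabs are lettered, hence the "E" screen
        cabs.setdefault(label, []).append(floor)
    return cabs

print(assign_cabs([12, 3, 14, 4, 12, 5], 3))
# {'A': [3, 4], 'B': [5, 12], 'C': [12, 14]}
```

Even this toy version shows where the claimed efficiency comes from - each cab serves a narrow band of floors - and also why groups get split up: two people heading to different floors may simply be assigned different cabs.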
Posted by Mark on November 06, 2005 at 08:12 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, October 23, 2005

MP3 Player Update
About a month ago, I wrote about MP3 Players in an attempt to figure out which player was best for me. At the time, I was leaning towards the 20GB iPod Photo, but the Cowon iAudio X5 was giving me serious pause. As such, I sort of just spun my wheels until I heard that Apple was going to announce another change to their iPod line, which ended up being the new iPod Video. This upgrade to the iPod line made my decision a lot easier, and I bought one the night it was announced. It seems that procrastination actually paid off for me.

After 5 days of steady use, I'm quite pleased with the iPod. It's easy to use, elegant, and it does everything I need it to do (and more). ArsTechnica has a thorough review, and I won't bother repeating most of it. The one thing I'll talk about is the "scratching" issue (as the Ars reviewer didn't mention much about that), which seems to be so bad with the iPod nano that many are assuming that the new black iPods will suffer from the same issue. So far, I've yet to get any scratches on my shiny new black iPod, but I have to admit that I'm a careful guy and I generally keep it in the soft carrying case that came with it when I'm not using it. The black model does seem to make fingerprints and the like much more visible, but that's not that big of a deal to me, as it cleans up easily.

The battery life seems excellent for playing music, but it may be a bit lacking when it comes to video. The 30GB model only has 2 hours of video playback, which would be enough for a short movie during a flight, but that's a mixed blessing in my mind, as I wouldn't then be able to listen to music for the remainder of a longer flight... I did download an episode of Lost, and the video itself does appear crisp and clear and surprisingly watchable (considering the relatively small size of the screen). It only plays .m4v files, which is mildly annoying, as most applications (by which I mean the ones I was able to find with 2 minutes of research) that encode in .m4v are only for the Mac. Evan Kirchhoff did an interesting comparison on his blog: Video iTunes vs. Piracy. The iTunes version downloaded faster and took up less space, but was also lower quality (in terms of both video and audio) and the compression wasn't as good either (and the pirated version was also widescreen). I think this is indicative of the fact that the new iPod isn't really the Video iPod, it's an iPod with video. Because of the small screen size, tiny CPU, and limited storage, I think the iTunes downloads make sense right now. As time goes on, I'm sure we'll see more advanced offerings, including higher quality downloads (perhaps even multiple encodings). In any case, the video functionality wasn't that important to me, but it is quite a nice perk (and it may come in useful at some point).

As for getting the iPod up and running in my car, I chose the Monster Cable iCarPlay Wireless FM Transmitter. I've had less time to evaluate this, but so far I've gotten a mediocre and uneven performance out of this. Sometimes it's excellent, but sometimes there is a lot of static (and changing stations doesn't seem to help). Part of the problem is that I'm in the Philadelphia area, so there aren't very many available stations (so far, 105.9 seems to work best for me). I suspect this is about as good as a FM transmitter of any kind would get for me, and I like the Monster's setup (3 preset stations) and when it's working well, it works really well. Naturally, one of those hard-wired systems that ties the ipod into your stereo controls would be ideal, but they're a bit too expensive ($200+) right now.

All in all, I'm quite happy with my new iPod...
Posted by Mark on October 23, 2005 at 08:15 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, October 16, 2005

Operation Solar Eagle
One of the major challenges faced in Iraq is electricity generation. Even before the war, neglect of an aging infrastructure forced scheduled blackouts. To compensate for the outages, Saddam distributed power to desired areas, while denying power to other areas. The war naturally worsened the situation (especially in the immediate aftermath, as there was no security at all), and the coalition and fledgling Iraqi government have been struggling to restore and upgrade power generation facilities since the end of major combat. Many improvements have been made, but attacks on the infrastructure have kept generation at or around pre-war levels for most areas (even if overall generation has increased, the equitable distribution of power means that some people are getting more than they used to, while others are not - ironic, isn't it?).

Attacks on the infrastructure have presented a significant problem, especially because some members of the insurgency seem to be familiar enough with Iraq's power network to attack key nodes, thus increasing the effects of their attacks. Consequently, security costs have gone through the roof. The ongoing disruption and inconsistency of power generation puts the new government under a lot of pressure. The inability to provide basic services like electricity delegitimizes the government and makes it more difficult to prevent future attacks and restore services.

When presented with this problem, my first thought was that solar power may actually help. There are many non-trivial problems with a solar power generation network, but Iraq's security situation combined with lowered expectations and an already insufficient infrastructure does much to mitigate the shortcomings of solar power.

In America, solar power is usually passed over as a large-scale power generation option, but things that are problems in America may not be so problematic in Iraq. What are the considerations?
  • Demand: One of the biggest problems with solar power is that it's difficult to schedule power generation to meet demand (demand doesn't go down when the sun does, nor does demand necessarily coincide with peak generation), and a lot of energy is wasted because there isn't a reliable way to store energy (battery systems help, but they're not perfect and they also drive up the costs). The irregularity in generation isn't as bad as wind, but it is still somewhat irregular. In America, this is a deal breaker because we need power generation to match demand, so if we were to rely on solar power on a large scale, we'd have to make sure we have enough backup capacity running to make up for any shortfall (there's much more to it than that, but that's the high-level view). In Iraq, this isn't as big of a deal. The irregularity of conventional generation due to attacks on infrastructure is at least comparable if not worse than solar irregularity. It's also worth noting that it's difficult to scale solar power to a point where it would make a difference in America, as we use truly mammoth amounts of energy. Iraq's demands aren't as high (both in terms of absolute power and geographic distribution), and thus solar doesn't need to scale as much in Iraq.
  • Economics: Solar power requires a high initial capital investment, and also requires regular maintenance (which can be costly as well). In America, this is another dealbreaker, especially when coupled with the fact that its irregular nature requires backup capacity (which is wasteful and expensive as well). However, in Iraq, the cost of securing conventional power generation and transmission is also exceedingly high, and the prevalence of outages has cost billions in repairs and lost productivity. The decentralized nature of solar power thus becomes a major asset in Iraq, as solar power (if using batteries and if connected to the overall grid) can provide a seamless, uninterruptible supply of electricity. Attacks on conventional systems won't have quite the impact they once did, and attacks on the solar network won't be anywhere near as effective (more on this below). Given the increased cost of conventional production (and securing that production) in Iraq, and given the resilience of such a decentralized system, solar power becomes much more viable despite its high initial expense. This is probably the most significant challenge to overcome in Iraq.
  • Security: There are potential gains, as well as new potential problems to be considered here. First, as mentioned in the economics section, a robust solar power system would help lessen the impact of attacks on conventional infrastructure, thus preventing expensive losses in productivity. Another hope here is that we will see a corresponding decrease in attacks (less effective attacks should become less desirable). Also, the decentralized nature of solar power means that attacks on the solar infrastructure are much more difficult. However, this does not mean that there is no danger. First, even if attacks on conventional infrastructure decrease, they probably won't cease altogether (though, again, the solar network could help mitigate the effects of such attacks). And there is also a new problem that is introduced: theft. In Iraq's struggling economy, theft of solar equipment is a major potential problem. Then again, once an area has solar power installed, individual homeowners and businesses won't be likely to neglect their most reliable power supply. Any attacks on the system would actually be attacks on specific individuals or businesses, which would further alienate the population and erode the attackers' support. However, this assumes that the network is already installed. Those who set up the network (most likely outsiders) will be particularly vulnerable during that time. Once installed, solar power is robust, but if terrorists attempt to prevent the installation (which seems likely, given that the terrorists seem to target many external companies operating in Iraq with the intention of forcing them to leave), that would certainly be a problem (at the very least, it would increase costs).
  • Other Benefits: If an installed solar power network helps deter attacks on power generation infrastructure, the success will cascade across several other vectors. A stable and resilient power network that draws from diverse energy sources will certainly help improve Iraq's economic prospects. Greater energy independence and an improved national energy infrastructure will also lend legitimacy to the new Iraqi government, making it stronger and better able to respond to the challenges of rebuilding their country. If successful and widespread, it could become one of the largest solar power systems in the world, and much would be learned along the way. This knowledge would be useful for everyone, not just Iraqis. Obviously, there are also environmental benefits to such a system (and probably a lack of bureaucratic red-tape like environmental impact statements as well. Indeed, while NIMBY might be a problem in America, I doubt it would be a problem in Iraq, due to their current conditions).
In researching this issue, I came across a recent study prepared at the Naval Postgraduate School called Operation Solar Eagle. The report is excellent, and it considers most of the above, and much more (in far greater detail as well). Many of my claims above are essentially assumptions, but this report provides more concrete evidence. One suggestion they make with regard to the problem of theft is to use an RFID system to keep track of solar power equipment. Lots of other interesting stuff in there as well.

As shown above, there are obviously many challenges to completing such a project, most specifically with respect to economic feasibility, but it seems to me to be an interesting idea. I'm glad that there are others thinking about it as well, though at this point it would be really nice to see something a little more concrete (or at least an explanation as to why this wouldn't work).
Posted by Mark on October 16, 2005 at 08:52 PM .: Comments (2) | link :.

End of This Day's Posts

Sunday, September 25, 2005

Feedback and Analysis
Jon Udell recaps some of the events from the Accelerating Change conference. Lots of interesting info on the Singularity theory, as both Vernor Vinge and Ray Kurzweil were in attendance, but what caught my eye was this description of how the eye works with the brain:
The example was a six-layered column in the neocortex connected to a 14x14-pixel patch of the retina. There are, Olshausen said, about 100,000 neurons in that chunk of neocortex. That sounds like a lot of circuitry for a few pixels, and it is, but we actually have no idea how much circuitry it is. ...

We are, however, starting to sort out the higher-level architecture of these cortical columns. And it's fascinating. At each layer, signals propagate up the stack, but there's also a return path for feedback. Focusing on the structure that's connected directly to the 14x14 retinal patch, Olshausen pointed out that the amount of data fed to that structure by the retina, and passed up the column to the next layer, is dwarfed by the amount of feedback coming down from that next layer. In other words, your primary visual processor is receiving the vast majority of its input from the brain, not from the world.
I found this quite simply amazing. The folks at the conference were interested in this because it means we're that much closer to understanding, and thus being able to artificially reproduce, the brain. However, this has other implications as well.

So the brain gets some input from the eye, but it's sending significantly more information towards the eye than it's receiving. This implies that the brain is doing a lot of processing and extrapolation based on the information it's been given. It seems that the information gathering part of the process, while important, is nowhere near as important as the analysis of that data. Sound familiar? Honestly, I haven't been keeping track of intelligence agencies of late, but the focus on data gathering without a corresponding focus on analysis certainly used to be a problem, and I think this finding is just another piece of evidence that says we need to focus on analysis.

This also applies to the business world. Lots of emphasis is placed on collecting sales data, especially on the internet, but unless you have a large dedicated staff to analyze that data, you won't end up with much in the way of actionable conclusions...
Posted by Mark on September 25, 2005 at 05:31 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, September 18, 2005

MP3 Players
So I have recently come into the market for an MP3 Player. I know, probably a few years too late, but I figured it's time to take the plunge, as the CD changer in my car decided to stop working and a few hours of listening to the dreck that is referred to as "radio" these days is enough to motivate me to spend tons of money to just make the pain stop.

So the primary goal for this device is going to be an MP3 Player. Naturally, there are all sorts of other features and gadgets that come along with most of the good players on the market, but I consider most of that stuff to be nice to have, but not a necessity. There has to be a way to get the player working in my car (I'm not too picky about that - those FM transmitters should do the trick) and I'll probably be carting the thing around everywhere as well. Rather than run through all the features, I'll run through the candidates and their features. As of now, I'm leaning towards the 20GB iPod Photo.
  • 4GB iPod Nano: I started looking at players just a few days before Apple announced the Nano, and I have to admit that it gave me pause. It is quite different from the other players in this list, and it certainly has a lot going for it, but the 4GB storage space is just too small. Granted, it would be an improvement on my current situation, but there was one other factor that makes me hesitate on this, and that's the forthcoming iPod phone that is speculated to be in development (i.e. not the Motorola ROKR). I'm guessing they could pack a few gigs onto the phone, and that would be a nice supplement to the full-blown players I'm considering below (and it's worth noting that this need not be an iPod phone - Sony has some interesting Walkman branded phones right now. With a little improvement, they could be mighty attractive). Still, a part of me really wants a Nano, so I guess there's still a small chance I'll end up with one... (The physical size of the player might be worth the lack of storage space.)
  • 20GB iPod Photo: This has pretty much everything I'm looking for, and then some. By all accounts, it's a well engineered and designed piece of work, and everyone I know who has one loves it. 20GB is a good size, and it allows photos and other file storage, which could be useful. It also has some productivity software like a calendar and todo list, which is nice (depending on how it works). I think one other big consideration when it comes to the iPod is the simple network effect: so many people have iPods that there is a good market for quality accessories. This is important, because I'm looking to use my player in the car, which would require some accessories.
  • Cowon iAudio X5: This one has almost everything the iPod has, plus some interesting features of its own. It comes in 20GB and 30GB versions at comparable prices, and it has some advanced functionality that includes an FM tuner and even the ability to watch video. However, there are some things that appear to be lacking when compared to the iPod. For instance, it requires an adapter to attach AC, line-in, and USB cables. The controls and design seem nice, but it doesn't seem like anyone can approach the iPod on design. Still, it definitely seems like the best alternative to the iPod out there, and it looks great on paper (though I guess there's still a nagging question of how it will perform in practice). I don't know much about the accessories that are available, but they seem somewhat less complete than what's available for the iPod.
  • Creative Zen Touch: This player is comparable to the iPod Photo and the X5, but anecdotal evidence from a friend makes me want to stay away from this one. This seems like a good case of a player that looks good on paper, but doesn't work as well in practice.
One thing the iPod really has going for it is that I've actually used it and I like it. It's well designed and elegant, which is why I'm leaning towards it. Any advice on the subject would be appreciated, however, as I'm certainly no expert.
Posted by Mark on September 18, 2005 at 08:28 PM .: Comments (2) | link :.

End of This Day's Posts

Sunday, August 21, 2005

Mastery II
I'm currently reading Vernor Vinge's A Deepness in the Sky. It's an interesting novel, and there are elements of the story that resemble Vinge's singularity. (Potential spoilers ahead) The story concerns two competing civilizations that travel to an alien planet. Naturally, there are confrontations and betrayals, and we learn that one of the civilizations utilizes a process to "Focus" an individual on a single area of study, essentially turning them into a brilliant machine. Naturally, there is a lot of debate about the Focused, and at one point in that debate, a character describes it like this:
... you know about really creative people, the artists who end up in your history books? As often as not, they're some poor dweeb who doesn't have a life. He or she is just totally fixated on learning everything about some single topic. A sane person couldn't justify losing friends and family to concentrate so hard. Of course, the payoff is that the dweeb may find things or make things that are totally unexpected. See, in that way, a little of Focus has always been part of the human race. We Emergents have simply institutionalized this sacrifice so the whole community can benefit in a concentrated, organized way.
Debate revolves around this concept because people living in this Focused state could essentially be seen as slaves. However, the quote above reminded me of a post I wrote a while ago called Mastery:
There is an old saying "Jack of all trades, Master of none." This is indeed true, though with the demands of modern life, we are all expected to live in a constant state of partial attention and must resort to drastic measures like Self-Censorship or information filtering to deal with it all. This leads to an interesting corollary for the Master of a trade: They don't know how to do anything else!
In that post, I quoted Isaac Asimov, who laments that he's clueless when it comes to cars, and relates a funny story about what happened when he once got a flat tire. I wondered if that sort of mastery was really a worthwhile goal, but the artificially induced Focus in Vinge's novel opens the floor up to several questions. Would you volunteer to be focused in a specific area of study, knowing that you would basically do that and only that? No family, no friends, but only because you are so focused on your studies (as portrayed in the novel, doing work in your field is what makes you happy). What if you could opt to be focused for a limited period of time?

There are a ton of moral and ethical questions about the practice, and as portrayed in the book, it's not a perfect process and may not be reversible (at least, not without damage). The rewards would be great - Focusing sounds like a truly astounding feat. But would it really be worth it? As portrayed in the book, it definitely would not, as those wielding the power aren't very pleasant. Because the Focused are so busy concentrating on their area of study, they become completely dependent on the non-Focused to guide them (it's possible for a Focused person to become too-obsessed with a problem, to the point where physical harm or even death can occur) and do everything else for them (i.e. feed them, clean them, etc...) Again, in the book, those who are guiding the Focused are ruthless exploiters. However, if you had a non-Focused guide who you trusted, would you consider it?

I still don't know that I would. While the results would surely be high quality, the potential for abuse is astounding, even when it's someone you trust that is pulling the strings. Nothing says they'll stay trustworthy, and it's quite possible that they could be replaced in some way by someone less trustworthy. If the process was softened to the point where the Focused retains at least some control over their focus (including the ability to go in and out), then this would probably be a more viable option. Fortunately, I don't see this sort of thing happening in the way proposed by the book, but other scenarios present interesting dilemmas as well...
Posted by Mark on August 21, 2005 at 09:25 PM .: Comments (0) | link :.

End of This Day's Posts

Sunday, July 03, 2005

Alien Invasions
Steven Spielberg's War of the Worlds is a pretty tense affair. The director knows how to lay on the suspense and he certainly applies that knowledge liberally in the film. It's a good thing too, because when he allows a short breather, your mind immediately starts asking questions that can only have embarrassingly illogical answers.

Luckily, Spielberg's version of the infamous H.G. Wells novel focuses on one character, not the big picture of the story. This relegates the aliens in the film to a MacGuffin, a mostly unexplained excuse to place pressure on the protagonist Ray Ferrier (played competently by Tom Cruise). In this respect, it resembles M. Night Shyamalan's Signs more than other recent big budget disaster films like Independence Day. Its pacing and relentless tension make the film feel more like horror than science fiction. Unfortunately, there are enough pseudo-explanations and speculations about the aliens to strain the suspension of disbelief that is required for this film to work. I've found that I generally have more movie-going goodwill than others (i.e. letting art be art), so I didn't mind the lack of details and even some of the odd quirky logic that seems to drive the plot, which really focuses on the aforementioned Ray's relationship with his kids (and not the aliens). Ultimately, there's nothing special about the story, but in the hands of someone as proficient as Spielberg, it works well enough for me. It's visually impressive and quite intense.

Besides, it's not like the concept itself makes all that much sense. In 1898, Wells' novel was probably seen as somewhat realistic, though the Martians-as-metaphor themes didn't escape anyone. In 1938, Orson Welles's infamous radio broadcast of the story scared the hell out of listeners who thought that an actual invasion was occurring. Today, the concept of an advanced alien civilization invading earth has lost much of its edge, perhaps because we understand the science of such a scenario much better than we used to. If you're able to put aside the nagging questions, it still holds a certain metaphorical value, but even that is starting to get a little old.

No explicit motivation is attributed to the aliens in Spielberg's film, but in other stories it generally comes down to the aliens' lust for resources ("They're like locusts. They're moving from planet to planet... their whole civilization. After they've consumed every natural resource they move on..."). This, of course, makes no sense.

Space is big. Huge. From what we know of life in the universe, it appears to be quite rare and extremely spread out. Travel between civilizations may be possible due to something exotic like a wormhole or faster-than-light travel, but even if that were possible (and that's a big if), traversing the distances involved in the usually huge and powerful alien craft is still bound to expend massive amounts of energy. And for what? Resources? What kinds of resources? Usually "resources" is code for energy, but that doesn't make much sense to me. They'd have to have found something workable (perhaps fusion) just to make the trip to Earth, right? In the miniseries V the aliens are after water, which is an impressively ignorant motivation (hydrogen and oxygen are among the universe's most abundant elements and water itself has been observed all over our galaxy). Perhaps the combination of water, mineral resources, a temperate climate, a protective and varied atmosphere, animal and plant life, and relatively stable ecosystems would make Earth a little more attractive.

What else makes Earth so special? There would have to be some sort of resource we have that most other planets don't. Again, Earth is one of the rare planets capable of supporting life, but we can infer that they're not looking for life itself (their first acts invariably include an attempt to exterminate all life they come across. In War of the Worlds, the alien tripods start by vaporizing every human they see. Later in the film, we see them sort of "eating" humans. This is a somewhat muddled message, to say the least). And whatever this resource is, it would have to justify risking a war with an indigenous intelligent life form. Granted, we probably wouldn't stand much of a chance against their superior technology, but at the very least, our extermination would require the expenditure of yet more energy (further discrediting the notion that what the aliens are after is an energy source). Plus, it's not like we've left the planet alone - we're busy using up the resources ourselves. Also, while our weapons may be no match for alien defenses, they'd be quite sufficient to destroy much of the planet's surface out of spite, rendering the alien invasion moot.

The only thing that even approaches making any sort of sense is that they want Earth as a new home for themselves. As one of the few planets capable of supporting life, I suppose it could be valuable in that respect. Indeed, in Wells' novel, the Martians attacked earth because their planet was dying. Spielberg's film seems determined to kinda-sorta keep true to the novel, except that the aliens appear to have planned this countless years ago, which makes it seem less likely. But again, why risk invading an already inhabited planet? Some stories have emphasized that the aliens were doing their equivalent of terraforming (this is implied in War of the Worlds when Ray looks out over a bizarrely changed landscape filled with red weeds), which is a good idea, but it still doesn't explain why Earth would be a target. From all appearances, there are plenty of empty planets out there...

So the concept itself is a bit tired to start with. Movies that aren't explicit invasions involving a civilization like our own fare a little better. Alien & Aliens do a good job of this, as have several other films.

In any case, War of the Worlds is still a reasonably good watch, so long as you don't mind the lack of scientific rigor. It's a visually impressive film, with a number of sequences that stand out. And he really doesn't give you all that much time to think about all the flaws...
Posted by Mark on July 03, 2005 at 10:56 AM .: Comments (3) | link :.

End of This Day's Posts

Sunday, April 10, 2005

Cell Phone Update
Because I know everyone is on the edge of their seat after last week's entry, I ended up going with the Nokia 3120. It's compact, light and has a reasonably long talk time. As far as talk time goes, the Sony Ericsson T237 seems to be king (at least, going by the statistics), but I didn't like the keypad (nor did I particularly love the screen or the controls). The Nokia was better in this respect, and I've always been happy with Nokia phones.

It's a bit of a low end phone, but the high end phones don't seem to have gotten to a point where it's really worth it just yet. The Sony Ericsson W800i seems really interesting. I'm in the market for an MP3 player as well, so it would be really nice to get that functionality with the phone. The cameras in phones are getting better and better as well (to the point where they're better than my digital camera, which is getting pretty old). Hitting three birds with one stone would be really nice, but unfortunately, the W800i isn't out yet (and some are reporting that it won't be released in the States at all), would probably cost a fortune even if it was available, and I'm sure that better models will eventually become available anyway, which is why I don't mind getting the low end model now...

Anyway, thanks for everyone's help. It was very... helpful. Um, yeah. Thanks.
Posted by Mark on April 10, 2005 at 07:22 PM .: link :.

End of This Day's Posts

Sunday, April 03, 2005

Cell Phone
So I'm in the market for a new cell phone. I'm no expert, but I've been reading up on the subject this weekend. I actually use my cell phone as my primary phone (I don't have a land line), so I might consider going for something other than a base model... but it seems that more advanced phones are loaded with features that I don't really need. What I really want out of the new phone is:
  • Strong Battery Life - This seems important since I'm going to be using it as my primary phone.
  • Call Quality - Again, this is important because I'm using it as my primary phone.
  • Size and Weight - I carry my phone with me wherever I go, and I usually keep it in my pocket. This seems ideally suited to a flip-phone, as they are small and the shape prevents accidental dialing. But I've never much cared for flip-phones (see next bullet), so what I'd really like is a small, light, candy bar style phone.
  • Usability - Stuff like navigation through the menus, button controls (including size, shape, placement, etc...), and how the phone feels in my hand and against my face are important. This is where flip-phones normally fail for me, but I'm trying to keep an open mind... If I do end up seriously considering a flip-phone, it will need to have an external screen with caller ID, so I can see who's calling without having to answer.
Most other features are nice-to-haves, but not by any means necessary for me. A quick rundown of features and my thoughts:
  • Text messaging, instant messaging, and email - I'd definitely like Text Messaging, but IM and email aren't a necessity.
  • Camera - Would be nice to have, but not that important to me.
  • Speakerphone - Again, nice to have, but not very important to me.
  • Wireless/Bluetooth/Infrared/Connectivity - It would be nice to backup all my data on my computer, but I don't absolutely need wireless and I wouldn't be upset without any sort of connectivity at all. Internet access would be nice, but isn't really necessary.
  • Sound - I couldn't really care less about ring-tones, and though it would be nice to knock out two birds with one stone by getting an MP3 player in the phone, I don't think the technology is there (nor am I really willing to pay for it - still, this Sony Ericsson W800i sounds pretty darn cool).
  • Games/Downloads - Don't really care at all. It's nice to have a game or two on the phone, but I really don't care much.
  • Style - Looks aren't that important to me. I'm not a big fan of glitzy designs or anything, so simple and to-the-point is what I'm looking for. It would be nice to have a good looking phone, but it's not essential.
  • Smart-Phones - Don't really need this either. I suppose, in the future, this will be the way to go, but I don't want to be that connected just yet (though if I ever do end up getting a blackberry type device, I would want it to also be a cell phone and probably an MP3 player as well).
I'm really just looking for something basic that I can carry around easily and reliably make calls with for a long period of time without needing to recharge the phone. I'll probably want text messaging and email as well. Most everything else is desirable, but not really needed either. I'm on a budget here, so I don't want to pay extra for a whole bunch of features I'm not going to use...

I'm not sure which provider I'm going to go with either, but I'll have to see what my options are. My employer had a deal with AT&T Wireless, so that is what I have now, but AT&T is now Cingular, so I'm not sure if that relationship still exists (or if we switched to something else). I would prefer a CDMA based phone, but several friends have had bad experiences with Sprint and Verizon is a little too expensive for me, especially if I can get a good deal with Cingular (which uses GSM).

In looking at the phones available for Cingular, I'm not especially fond of any available options. The closest thing to what I want is the Sony Ericsson T237 or the Nokia 3120. Both are pretty low end models, but it seems like the big differences in the next steps up are the extraneous features I don't really need (like the camera, Bluetooth, etc...) As of right now, I'm leaning towards the Sony Ericsson T237 (or the Sony Ericsson T637, which is nicer, but is also more expensive and has lots of features I don't especially need). It's nice and small, it apparently has a fantastic battery life, and decent call quality. Most reviews I've seen give it reasonable marks and recommend it as a good no-frills phone. Some user reviews give it pretty bad marks though, which is why I'm considering the T637 (despite its extra features).

Of course, I'll need to look at these things in the store before I really make my decision, but any advice on cell-phone buying would be much appreciated. I haven't really looked into Verizon phones yet, but I'm going to give it consideration...

Update: In researching and thinking about this a little more, I think some of the more feature-rich phones might be worth considering, despite my initial distaste. So for now, the front-runner is the T637. We shall see. Suggestions or advice still welcome...
Posted by Mark on April 03, 2005 at 04:35 PM .: link :.

End of This Day's Posts

Sunday, March 27, 2005

Accelerating Change
Slashdot links to a fascinating and thought provoking one hour (!) audio stream of a speech "by futurist and developmental systems theorist, John Smart." The talk is essentially about the future of technology, more specifically information and communication technology. Obviously, there is a lot of speculation here, but it is interesting so long as you keep it in the "speculation" realm. Much of this is simply a high-level summary of the talk with a little commentary sprinkled in.

He starts by laying out some key motivations or guidelines for thinking about this sort of thing, paraphrasing David Brin (and the quote below is actually my paraphrase of Smart):
We need a pragmatic optimism, a can-do attitude, a balance between innovation and preservation, honest dialogue on persistent problems, ... tolerance of the imperfect solutions we have today, and the ability to avoid both doomsaying and a paralyzing adherence to the status quo. ... Great input leads to great output.
So how do new systems supplant the old? They do useful things with less matter, less energy, and less space. They do this until they reach some sort of limit along those axes (a limitation of matter, energy, or space). It turns out that evolutionary processes are great at this sort of thing.

Smart goes on to list three laws of information and communication technology:
  1. Technology learns faster than you do (on the order of 10 million times faster). At some point, Smart speculates that there will be some sort of persistent Avatar (neural-net prosthesis) that will essentially mimic and predict your actions, and that the "thinking" it will do (pattern recognitions, etc...) will be millions of times faster than what our brain does. He goes on to wonder what we will look like to such an Avatar, and speculates that we'll be sort of like pets, or better yet, plants. We're rooted in matter, energy, and space/time and are limited by those axes, but our Avatars will have a large advantage, just as we have a large advantage over plants in that respect. But we're built on top of plants, just as our Avatars will be built on top of us. This opens up a whole new can of worms regarding exactly what these Avatars are, what is actually possible, and how they will be perceived. Is it possible for the next step in evolution to occur in man-made (or machine-made) objects? (This section is around 16:30 in the audio)
  2. Human beings are catalysts rather than controllers. We decide which things to accelerate and which to slow down, and this is tremendously important. There are certain changes that are evolutionarily inevitable, but the path we take to reach those ends is not set and can be manipulated. (This section is around 17:50 in the audio)
  3. Interface is extremely important and the goal should be a natural high-level interface. His example is calculators. First generation calculators simply automate human processes and take away your math skills. Second generation calculators like Mathematica allow you to get a much better look at the way math works, but the interface "sucks." Third generation calculators will have a sort of "deep, fluid, natural interface" that allows a kid to have the understanding of a grad student today. (This section is around 20:00 in the audio)
Interesting stuff. His view is that most social and technological advances of the last 75 years or so are more accelerating refinements (changes in the microcosm) rather than disruptive changes (changes in the macrocosm). Most new technological advances are really abstracted efficiencies - it's the great unglamorous march of technology. They're small and they're obfuscated by abstraction, thus many of the advances are barely noticed.

This is about halfway through the speech, and he goes on to list many examples and explore some more interesting concepts. Here are some bits I found interesting.
  • He talks about transportation and energy, and he argues that even though, on a high level we haven't advanced much (still using oil, natural gas - fossil fuels), there has actually been a massive amount of change, but that the change is mostly hidden in abstracted accelerating efficiencies. He mentions that we will probably have zero-emission fossil fuel vehicles 30-40 years from now (which I find hard to believe) and that rather than focusing on hydrogen or solar, we should be trying to squeeze more and more efficiency out of existing systems (i.e. abstracted efficiencies). He also mentions population growth as a variable in the energy debate, something that is rarely done, but if he is correct that population will peak around 2050 (and that population density is increasing in cities), then that changes all projections about energy usage as well. (This section is around 31:50-35 in the audio) He talks about hybrid technologies and also autonomous highways as being integral in accelerating efficiencies of energy use (This section is around 37-38 in the audio) I found this part of the talk fascinating because energy debates are often very myopic and don't consider things outside the box like population growth and density, autonomous solutions, phase shifts of the problem, &c. I'm reminded of this Michael Crichton speech where he says:
    Let's think back to people in 1900 in, say, New York. If they worried about people in 2000, what would they worry about? Probably: Where would people get enough horses? And what would they do about all the horseshit? Horse pollution was bad in 1900, think how much worse it would be a century later, with so many more people riding horses?
    None of which is to say that we shouldn't be pursuing alternative energy technology or that it can't supplant fossil fuels, just that things seem to be trending towards making fossil fuels more efficient. I see hybrid technology becoming the major enabler in this arena, possibly followed by the autonomous highway (that controls cars and can perhaps give an extra electric boost via magnetism). All of which is to say that the future is a strange thing, and these systems are enormously complex and are sometimes driven by seemingly unrelated events.
  • He mentions an experiment in genetic algorithms used for process automation. Such evolutionary algorithms are often used in circuit design and routing processes to find the most efficient configuration. He mentions one case where someone made a mistake at the quantum level of a system, and when they used the genetic algorithm to design the circuit, they found that the imperfection was actually exploited to create a better circuit. These sorts of evolutionary systems are robust because failure actually drives the system. It's amazing. (This section is around 47-48 in the audio)
  • He then goes on to speculate as to what new technologies he thinks will represent disruptive change. The first major advance he mentions is the development of a workable LUI - a language-based user interface that utilizes a natural language that is easily understandable by both the average user and the computer (i.e. a language that doesn't require years of study to figure out, a la current programming languages). He thinks this will grow out of current search technologies (perhaps in a scenario similar to EPIC). One thing he mentions is that the internet right now doesn't give an accurate representation of the wide range of interests and knowledge that people have, but that this is steadily getting better over time. As more and more individuals, with more and more knowledge, begin interacting on the internet, they begin to become a sort of universal information resource. (This section is around 50-53 in the audio)
  • The other major thing he speculates about is the development of personality capture and parallel computing, which sort of integrates with the LUI. This is essentially the Avatar I mentioned earlier which mimics and predicts your actions.
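The evolutionary approach described in the circuit-design bullet above is easy to sketch. The following is a toy illustration, not the experiment from the talk: the genomes are bit strings, the `evolve` function and all of its parameters are invented for the example, and the "fitness" is just a count of 1-bits standing in for circuit efficiency.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=200, mutation_rate=0.02):
    """Minimal genetic algorithm: truncation selection, single-point
    crossover, and random point mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half of the population...
        survivors = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]  # ...breed them via crossover...
            # ...and occasionally flip a bit: the "mistakes" that let the
            # search stumble into configurations a designer wouldn't try.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)
print(sum(best))  # converges at or near the maximum of 20
```

The point of the sketch is the mutation line: random imperfection isn't noise to be filtered out, it's what drives the search.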
As always, we need to keep our feet on the ground here. Futurists are fun to listen to, but it's easy to get carried away. The development of a LUI and a personality capture system would be an enormous help, but we still need good information aggregation and correlation systems if we're really going to progress. Right now the problem is twofold: finding the information we need, and then analyzing it. A LUI and personality capture system will help with the finding of information, but not so much with the analysis (the separating of the signal from the noise). As I mentioned before, the speech is long (one hour), but it's worth a listen if you have the time...
Posted by Mark on March 27, 2005 at 08:40 PM .: link :.

End of This Day's Posts

Sunday, May 02, 2004

The Unglamorous March of Technology
We live in a truly wondrous world. The technological advances over just the past 100 years are astounding, but, in their own way, they're also absurd and even somewhat misleading, especially when you consider how these advances are discovered. More often than not, we stumble onto something profound by dumb luck or by brute force. When you look at how a major technological feat was accomplished, you'd be surprised by how unglamorous it really is. That doesn't make the discovery any less important or impressive, but we often take the results of such discoveries for granted.

For instance, how was Pi originally calculated? Chris Wenham provides a brief history:
So according to the Bible it's an even 3. The Egyptians thought it was 3.16 in 1650 B.C. Ptolemy figured it was 3.1416 in 150 AD. And on the other side of the world, probably oblivious to Ptolemy's work, Zu Chongzhi calculated it to 355/113. In Baghdad, circa 800 AD, al-Khwarizmi agreed with Ptolemy; 3.1416 it was, until James Gregory begged to differ in the late 1600s.

Part of the reason why it was so hard to find the true value of Pi (π) was the lack of a good way to precisely measure a circle's circumference when your piece of twine would stretch and deform in the process of taking it. When Archimedes tried, he inscribed two polygons in a circle, one fitting inside and the other outside, so he could calculate the average of their boundaries (he calculated π to be 3.1418). Others found you didn't necessarily need to draw a circle: Georges Buffon found that if you drew a grid of parallel lines, each 1 unit apart, and dropped a pin on it that was also 1 unit in length, then the probability that the pin would fall across a line was 2/π. In 1901, someone dropped a pin 34080 times and got an average of 3.1415929.
π is an important number, and being able to determine its value has been a significant factor in the advance of technology. While all of these numbers are pretty much the same (to varying degrees of precision), isn't it absurd that someone figured out π by dropping 34,000 pins on a grid? We take π for granted today; we don't have to go about finding the value of π, we just use it in our calculations.
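Buffon's experiment is easy to re-run today without dropping a single physical pin. Here's a minimal Monte Carlo sketch of the same setup (unit pin length, lines one unit apart; the function name and drop count are my own):

```python
import math
import random

def buffon_pi(drops=100_000):
    """Estimate π by simulating Buffon's needle drops."""
    hits = 0
    for _ in range(drops):
        center = random.random() / 2       # distance from pin center to nearest line
        theta = random.random() * math.pi  # pin angle relative to the lines
        if center <= math.sin(theta) / 2:  # the pin crosses a line
            hits += 1
    return 2 * drops / hits                # P(cross) = 2/π, so π ≈ 2N/hits

print(buffon_pi())  # typically lands within a couple hundredths of π
```

A hundred thousand virtual drops take a fraction of a second, which makes the 1901 experimenter's 34,080 real drops look even more heroically absurd.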

In Quicksilver, Neal Stephenson portrays several experiments performed by some of the greatest minds in history, and many of the things they did struck me as especially unglamorous. Most would point to the dog and bellows scene as a prime example of how unglamorous the unprecedented age of discovery recounted in the book really was (and they'd be right), but I'll choose something more mundane (page 141 in my edition):
"Help me measure out three hundred feet of thread," Hooke said, no longer amused.

They did it by pulling the thread off of a reel, and stretching it alongside a one-fathom-long rod, and counting off fifty fathoms. One end of the thread, Hooke tied to a heavy brass slug. He set the scale up on the platform that Daniel had improvised over the mouth of the well, and put the slug, along with its long bundle of thread, on the pan. He weighed the slug and thread carefully - a seemingly endless procedure disturbed over and over by light gusts of wind. To get a reliable measurement, they had to devote a couple of hours to setting up a canvas wind-screen. Then Hooke spent another half hour peering at the scale's needle through a magnifying lens while adding or subtracting bits of gold foil, no heavier than snowflakes. Every change caused the scale to teeter back and forth for several minutes before settling into a new position. Finally, Hooke called out a weight in pounds, ounces, grains, and fractions of grains, and Daniel noted it down. Then Hooke tied the free end of the thread to a little eye he had screwed on the bottom of the pan, and he and Daniel took turns lowering the weight into the well, letting it drop a few inches at a time - if it got to swinging, and scraped against the chalky sides of the hole, it would pick up a bit of extra weight, and ruin the experiment. When all three hundred feet had been let out, Hooke went for a stroll, because the weight was swinging a little bit, and its movements would disturb the scale. Finally, it settled down enough that he could go back to work with his magnifying glass and his tweezers.
And, of course, the experiment was a failure. Why? The scale was not precise enough! The book is filled with similar experiments, some successful, some not.

Another example is telephones. Pick one up, enter a few numbers on the keypad and voila! you're talking to someone halfway across the world. Pretty neat, right? But how does that system work, behind the scenes? Take a look at the photo on the right. This is a typical intersection in a typical American city, and it is absolutely absurd. Look at all those wires! Intersections like that are all over the world, which is part of the reason I can pick up my phone and talk to someone so far away. Another part of the reason I can do that is that almost everyone has a phone. And yet, this system is perceived to be elegant.

Of course, the telephone system has grown over the years, and what we have now is elegant compared to what we used to have:
The engineers who collectively designed the beginnings of the modern phone system in the 1940's and 1950's only had mechanical technologies to work with. Vacuum tubes were too expensive and too unreliable to use in large numbers, so pretty much everything had to be done with physical switches. Their solution to the problem of "direct dial" with the old rotary phones was quite clever, actually, but by modern standards was also terribly crude; it was big, it was loud, it was expensive and used a lot of power and worst of all it didn't really scale well. (A crossbar is an N² solution.) ... The reason the phone system handles the modern load is that the modern telephone switch bears no resemblance whatever to those of 1950's. Except for things like hard disks, they contain no moving parts, because they're implemented entirely in digital electronics.
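The scaling remark in that quote ("a crossbar is an N² solution") is just arithmetic: a full crossbar needs a crosspoint for every pair of lines, so hardware grows quadratically while subscribers grow linearly. A quick illustration (the subscriber counts are made up for scale):

```python
# A full crossbar connecting N lines is an N x N grid of crosspoints.
for subscribers in (100, 1_000, 10_000, 100_000):
    crosspoints = subscribers ** 2
    print(f"{subscribers:>7} lines -> {crosspoints:>14,} crosspoints")
```

Ten times the subscribers means a hundred times the switching hardware, which is why the mechanical approach couldn't keep up.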
So we've managed to get rid of all the moving parts and make things run more smoothly and reliably, but isn't it still an absurd system? It is, but we don't really stop to think about it. Why? Because we've hidden the vast and complex backend of the phone system behind innocuous looking telephone numbers. All we need to know to use a telephone is how to operate it (i.e. how to punch in numbers) and what number we want to call. Wenham explains, in a different essay:
The numbers seem pretty simple in design, having an area code, exchange code and four digit number. The area code for Manhattan is 212, Queens is 718, Nassau County is 516, Suffolk County is 631 and so-on. Now let's pretend it's my job to build the phone routing system for Emergency 911 service in the New York City area, and I have to route incoming calls to the correct police department. At first it seems like I could use the area and exchange codes to figure out where someone's coming from, but there's a problem with that: cell phone owners can buy a phone in Manhattan and get a 212 number, and yet use it in Queens. If someone uses their cell phone to report an accident in Queens, then the Manhattan police department will waste precious time transferring the call.

Area codes are also used to determine the billing rate for each call, and this is another way the abstraction leaks. If you use your Manhattan-bought cell phone to call someone ten yards away while vacationing in Los Angeles, you'll get charged long distance rates even though the call was handled by a local cell tower and local exchange. Try as you might, there is no way to completely abstract the physical nature of the network.
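The routing leak Wenham describes fits in a few lines of code. This is a hypothetical sketch, not how real E911 dispatch works; the area codes are the ones from the essay, and the function and department names are invented:

```python
# Naive router: assume the area code tells you where the caller is.
AREA_CODE_TO_DEPT = {
    "212": "Manhattan PD",
    "718": "Queens PD",
    "516": "Nassau County PD",
    "631": "Suffolk County PD",
}

def route_call(phone_number: str) -> str:
    return AREA_CODE_TO_DEPT.get(phone_number[:3], "Central dispatch")

# The abstraction leaks: a 212 cell phone reporting an accident in Queens
# still routes to Manhattan, because the number encodes where the phone
# was bought, not where the caller is standing.
print(route_call("2125551234"))
```

The bug isn't in the code; it's in the assumption the abstraction invites you to make.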
He also mentions cell phones, which are somewhat less absurd than plain old telephones, but when you think about it, all we've done with cell phones is abstract the telephone lines. We're still connecting to a cell tower (which needs to be placed with high frequency throughout the world), and from there, a call is often routed through the plain old telephone system. If we could see the RF layer in action, we'd be astounded; it would make the telephone wires look organized and downright pleasant by comparison.

The act of hiding the physical nature of a system behind an abstraction is very common, but it turns out that all major abstractions are leaky. And yet all leaks in an abstraction are, to some degree, useful.

One of the most glamorous technological advances of the past 50 years was the advent of space travel. Contemplating the heavens is an awe-inspiring and humbling experience, to be sure, but when you start breaking things down to the point where we can put a man in space, things get very dicey indeed. When it comes to space travel, there is no more glamorous a figure than the astronaut, but again, how does one become an astronaut? They need to pore through and memorize giant phonebook-sized volumes filled with technical specifications and detailed schematics. Hardly a glamorous proposition.

Steven Den Beste recently wrote a series of articles concerning the critical characteristics of space warships, and it is fascinating reading, but one of the things that struck me about the whole concept was just how unglamorous space battles would be. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. This is, perhaps, the reason so many science fiction movies and books seem to flout the rules of physics. As a side note, I think a spectacular film could be made while still obeying the rules of physics, but that is only because we're so used to the absurd, physics-defying space battles.

None of this is to say that technological advances aren't worthwhile or that those who discover new and exciting concepts are somehow not impressive. If anything, I'm more impressed at what we've achieved over the years. And yet, since we take these advances for granted, we marginalize the effort that went into their discovery. This is due in part to the necessary abstractions we make to implement various systems. But when abstractions hide the crude underpinnings of technology, we come to see the technology and its creation as glamorous, thus bestowing honors upon those who make the discovery (perhaps for the wrong reasons). It's an almost paradoxical cycle. Perhaps because of this, we expect newer discoveries and innovations to somehow be less crude, but we must realize that all of our discoveries are inherently crude.

And while we've discovered a lot, it is still crude and could use improvements. Some technologies have stayed the same for thousands of years. Look at toilet paper. For all of our wondrous technological advances, we're still wiping our ass with a piece of paper. The Japanese have the most advanced toilets in the world, but they've still not figured out a way to bypass the simple toilet paper (or, at least, abstract the process). We've got our work cut out for us. Luckily, we're willing to go to absurd lengths to achieve our goals.
Posted by Mark on May 02, 2004 at 09:47 PM .: link :.

End of This Day's Posts

Wednesday, April 21, 2004

Shields Up!
Steven Den Beste has a fascinating post about the critical characteristics of space warships. He approaches the question from a realistic angle, mostly relying on current technology, only extrapolating reasonable advances. He rules out the sci-fi stuff ("hyperspace," "subspace," "leap cannon," etc...) right from the start, and a few things struck me while reading it.

This post will deal with one of the things that he has (reasonably) decided not to include in his discussion: energy shields. I'm doing this mostly as a thought exercise. I've found that writing about a subject helps me learn about it, and this is something I'd like to know more about. That said, I don't know how conclusive this post will be. As it stands now, the post will raise more questions than it answers. Another post will deal with a subject I've been thinking about a lot lately, which is how unglamorous technological advance can be, and how space battles might be a good example. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. There is a reason why science fiction films flout the rules of physics. But that is another topic for another post.

Once he discards the useless physics-defying science fiction inventions, Den Beste goes on to list a number of possible weapons, occasionally mentioning defense systems. Given that I'll be focusing on defense systems, it's worth noting the types of attacks that will need to be repelled. Here is a basic list of weapons for use in a space battle:
  • Lasers
  • Masers (Similar to lasers, but operating at microwave frequencies)
  • Particle Beams
  • Missiles (with a variety of warheads)
  • "Dumb" Projectiles
Strangely enough, I recently came across the concept of cold plasma, which may be able to shed some light on how to defend against the weapons Den Beste laid out. Cold plasma in the quantities and density required to repel attacks is not yet technologically feasible, and articles like this aren't always reliable (sometimes exaggerating the effects of new technology).

Plasma is basically a collection of molecules, atoms, electrons and positively charged ions, and it makes up 99% of the known universe. Hot plasma is present in the sun - at high temperatures hydrogen nuclei can fuse into heavier nuclei despite a mutual electric repulsion. When these particles collide in the sun, they acquire enough energy to fuse, and release a tremendous amount of energy. Unfortunately, hot plasmas are not of much use for defensive purposes, as the temperatures are too high, and would be destructive.

Colder plasmas, however, would do the trick. A plasma's charged particles interact constantly, creating localized attractions or repulsions. An external energy attack, from weapons such as lasers, high powered microwave bursts, or particle beams, would theoretically be caught up in the plasma's complex electromagnetic fields and dissipated or deflected. If the plasma could be made sufficiently dense, it could even deflect missiles and other projectiles. The process of absorbing and dissipating energy could also go a long way toward defeating radar... but as Den Beste noted, IR detectors would be the primary sensor used in space, so this sort of "cloaking" ability would be of limited use.

Interestingly, such a cold plasma shield could also be applied to projectiles such as missiles, shielding them from the defensive measures Den Beste thinks would be used against them.

Unfortunately, cold plasma requires a lot of energy to produce. And since I can't seem to find an adequate explanation of what cold plasma really is or, rather, how it is produced, the use of cold plasma brings up a number of questions. My primary concern has to do with the energy needed to produce cold plasma, and how the excess heat would be dissipated. Den Beste notes:
Warships will be hot and will have to shed a lot of heat in order to avoid destroying themselves.

There are a lot of ways of getting rid of waste heat, and convection is by far the easiest and most convenient. It's what cars use, and what nuclear power plants use, and what our bodies use. A fan moves air past the radiator of a car, and since the radiator is warmer than the air, it is cooled and the air is warmed. The cooling tower of a nuclear reactor sheds heat into cold water, boiling it and turning it into water vapor which is dispersed into the atmosphere. Our bodies shed heat in expelled breath, and through our skins into the air, sometimes aided by sweat.

Unfortunately, in space there's no atmosphere to convect heat into, and you have to rely on radiation.
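To put rough numbers on that constraint: a radiator in vacuum sheds heat only as thermal radiation, governed by the Stefan-Boltzmann law, P = εσAT⁴. A back-of-the-envelope sketch (the radiator area, temperature, and emissivity are invented for illustration):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_watts(area_m2, temp_k, emissivity=0.9):
    """Power shed by a radiator into vacuum: P = e * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A hypothetical 100 m^2 radiator panel at 400 K:
print(f"{radiated_watts(100, 400):,.0f} W")  # roughly 130 kW
```

The T⁴ term is the interesting part: doubling the radiator's temperature sheds sixteen times the heat, which is why a radiator wants to run hot, and why hiding it behind a shield hurts so much.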
Now, you've created a cold plasma force field around your spacecraft that could theoretically deflect electromagnetic attacks from weapons like lasers, masers, and particle beams, but what about the heat produced on your own ship? How would heat interact with the cold plasma? Would the plasma absorb it? If so, wouldn't you saturate the plasma shield? After all, you'd be producing an awful lot of heat even before accounting for the massive amount of energy needed to maintain the plasma field; add that, and couldn't you overload it? And if you surrounded your ship, how would the heat escape? Exposing the radiator would defeat the purpose of having a shield in the first place, as the radiator would be one of the primary targets.

Well, perhaps I've figured out why Den Beste ruled out energy shields in the first place. Sorry if this seemed like a waste of time, but I found it at least somewhat interesting, even if it wasn't conclusive. And I've also found a new respect for the type of theoretical discussions Den Beste is so good at... Stay tuned for a more general (and hopefully more interesting) discussion on the unglamorous march of technology.

Update: Buckethead has an excellent series of 4 posts on War in Space (one, two, three, four). I am clearly outclassed. One of these days I'll crank out that post about the unglamorous side of technology advancement, but for now, I'll leave the technical aspects in the capable hands of Den Beste and Buckethead...
Posted by Mark on April 21, 2004 at 08:21 PM .: link :.

End of This Day's Posts

Sunday, March 28, 2004

USA Today has a fascinating look inside a little-known CIA initiative:
...In-Q-Tel is the venture-capital arm of the CIA.

That's right: The CIA is investing in tech start-ups. At a time when the CIA has come under fire for intelligence lapses, In-Q-Tel offers a promising path to technology that might help the agency spot trouble sooner and make fewer errors.

In-Q-Tel, set up in 1999, invests about $35 million a year in young companies creating technology that might improve the ability of the United States to spy on its nemeses. It has kept a low profile and is not much known outside of the intelligence community and Silicon Valley.
The program has apparently been very successful, and will most likely be renewed. The DoD has expressed interest in duplicating the model for their own purposes.

Despite its name being inspired by James Bond's Q, In-Q-Tel doesn't seem to be investing in high-tech weaponry or spy gadgets. Its focus seems to run more towards finding, sorting and communicating data. Products range from an application that can translate documents from Arabic into English, to an advanced Google-like search engine, to weblogging software(!). Public/private partnerships aren't very common in the US, but there are some exceptions, and in this case, it looks like it was a good idea.
...Tenet explained that the CIA and government labs had always been on the leading edge of tech. But the Internet boom poured so much money into tech start-ups, the start-ups leapt ahead of the CIA. And scientists and technologists who had innovative ideas went off to be entrepreneurs and get rich - they didn't want government salaries at the CIA.

At the same time, tech companies were booming and didn't want the hassle of dealing with the government's procurement process. Most never thought of contacting the CIA. Tech companies didn't know what the CIA might need, and the CIA had no idea what the tech companies were inventing - a dangerous disconnect with lives on the line.
Of course, the public/private and somewhat low profile nature of the program makes for some strange rumors:
In-Q-Tel has become known for being thorough yet furtive. These days, when a young company is making a presentation at an event, an unknown man or woman might come in, listen intently, then disappear. Such is In-Q-Tel's mystique that entrepreneurs often believe those are In-Q-Tel scouts even when they're not.
As I said before, the program has been successful (though success is measured in more than just money here - they're actually finding useful applications, and that's what the real goal is) but the CIA is characteristically cautious:
"It has far exceeded anything I could've hoped for when we had that first meeting," Augustine says. But he adds a note of caution, apropos for the CIA, which had been stuck for too long in old ways of finding new technology. "No idea is good forever," Augustine says. "We'll have to see how it holds up with time."
Update: Charles Hudson is a blogger who works for In-Q-Tel. Interesting.
Posted by Mark on March 28, 2004 at 04:58 PM .: link :.

End of This Day's Posts

Sunday, March 14, 2004

My New Toy
Pictured to the right is my new toy, a Pioneer DVR 106 DVD±RW Burner. I wanted to get a DVD drive for the computer so that I could do screen grabs for film reviews and scene analysis (for instance, it would help a great deal to have screenshots on my scene analysis of Rear Window), but when I looked into it, I found out that DVR drives were shockingly inexpensive. In fact, it cost approximately $100 less than my CD Burner (which I bought several years ago, when they hadn't yet become commonplace). For the record, a simple DVD ROM drive is also shockingly inexpensive, but the added functionality in a DVR drive seemed worth the price.
Posted by Mark on March 14, 2004 at 08:16 PM .: link :.

End of This Day's Posts

Sunday, February 15, 2004

Deterministic Chaos and the Simulated Universe
After several months of absence, Chris Wenham has returned with a new essay entitled 2 + 2. In it, he explores a common idea:
Many have speculated that you could simulate a working universe inside a computer. Maybe it wouldn't be exactly the same as ours, and maybe it wouldn't even be as complex, either, but it would have matter and energy and time would elapse so things could happen to them. In fact, tiny little universes are simulated on computers all the time, for both scientific work and for playing games in. Each one obeys simplified laws of physics the programmers have spelled out for them, with some less simplified than others.
As always, the essay is well done and thought provoking, exploring the idea from several mathematical angles. But it makes the assumption that the universe is both deterministic and infinitely quantifiable. I am certainly no expert on chaos theory, but it seems to me that it has a direct bearing on this subject.

A system is said to be deterministic if its future states are strictly dependent on current conditions. Historically, it was thought that all processes occurring in the universe were deterministic, and that if we knew enough about the rules governing the behavior of the universe and had accurate measurements of its current state, we could predict what would happen in the future. Naturally, this theory has proven very useful in modeling real-world events such as the flight of projectiles or the ebb and flow of the tides, but there have always been systems which were more difficult to predict. Weather, for instance, is notoriously tricky. It was always thought that these difficulties stemmed from an incomplete knowledge of how the system works or from inaccurate measurement techniques.

In his essay, Wenham discusses how a meteorologist named Edward Lorenz stumbled upon the essence of what is referred to as chaos (or nonlinear dynamics, as it is often called):
Lorenz's simulation worked by processing some numbers to get a result, and then processing the result to get the next result, thus predicting the weather two moments of time into the future. Let's call them result1, which was fed back into the simulation to get result2. result3 could then be figured out by plugging result2 into the simulation and running it again. The computer was storing resultn to six decimal places internally, but only printing them out to three. When it was time to calculate result3 the following day, he re-entered result2, but only to three decimal places, and it was this that led to the discovery of something profound.

Given just an eentsy teensty tiny little change in the input conditions, the result was wild and unpredictable.
This phenomenon is called "sensitive dependence on initial conditions." For systems in which we can successfully make predictions (such as the path of a flying object), only a reasonable approximation of the initial state is necessary to make a reasonably accurate prediction. In a system exhibiting sensitive dependence, however, reasonable approximations of the initial state do not yield reasonable approximations of the future state; tiny errors in measurement compound until the prediction is worthless.

So here comes the important part: for a chaotic system such as weather, in order to make useful long term predictions, you need measurements of initial conditions with infinite accuracy. What this means is that even a deterministic system, which in theory can be modeled by mathematical equations, can generate behavior which seems random and unpredictable. This manifests itself in nature all the time. Weather is the typical example, but there is also evidence that the human brain is governed by deterministic chaos. Indeed, our brain's ability to generate seemingly unpredictable behavior is an important component of both survival and creativity.
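You can reproduce Lorenz's accident in miniature. The sketch below substitutes the logistic map, a standard textbook chaotic system, for Lorenz's actual weather model, and runs it twice from the same seed: once at full precision, once truncated to three decimal places, just as Lorenz re-entered his printout:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, x -> r*x*(1-x), chaotic at r=4."""
    return r * x * (1 - x)

x_full, x_trunc = 0.123456, 0.123  # same state, truncated to three decimals
divergence = []
for _ in range(50):
    x_full, x_trunc = logistic(x_full), logistic(x_trunc)
    divergence.append(abs(x_full - x_trunc))

# The runs start ~0.0005 apart; within a few dozen steps they are unrelated.
print(f"first-step gap: {divergence[0]:.6f}  worst gap: {max(divergence):.3f}")
```

The tiny rounding error doesn't stay tiny: it grows roughly geometrically with each step until the two trajectories bear no resemblance to each other, which is exactly what Lorenz saw on his printout.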

So my question is, if it is not possible to quantify the initial conditions of a chaotic system with infinite accuracy, is that system really deterministic? In a sense, yes, even though it is impossible to calculate it:
Michelangelo claimed the statue was already in the block of stone, and he just had to chip away the unnecessary parts. And in a literal sense, an infinite number of universes of all types and states should exist in thin air, indifferent to whether or not we discover the rules that exactly reveal their outcome. Our own universe could even be the numerical result of a mathematical equation that nobody has bothered to sit down and solve yet.

But we'd be here, waiting for them to discover us, and everything we'll ever do.
The answer might be there, whether we can calculate it or not, but even if it is, can we really do anything useful with it? In the movie Pi, a mathematician stumbles upon an enigmatic 216-digit number which is supposedly the representation of the infinite, the true name of God, and thus holds the key to deterministic chaos. But it's just a number, and no one really knows what to do with it, not even the mathematician who discovered it (though he could make accurate predictions for the stock market, he could not understand why, and the ability came at a price). In the end, it drove him mad. I don't pretend to have any answers here, but I think the makers of Pi got it right.
Posted by Mark on February 15, 2004 at 02:33 PM .: link :.

End of This Day's Posts

Wednesday, January 28, 2004

JASON Lives!
Established in 1960, JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. Most of its work is the product of an annual summer study, and the group has done work for the DOD (including DARPA), FBI, CIA and DOE. FAS recently collected and published several recent unclassified JASON studies on their website. They cover a wide array of subjects, ranging from quantum computing to nanotechnology to nuclear weapon maintenance. There is way too much material there to summarize, so here are just a few that caught my eye:
  • Counterproliferation January 1998 (3.3 MB .pdf): The first sentence: "Intelligence efforts should focus on humint collections as early as possible in the proliferation timeline and should continue such efforts throughout the proliferation effort." Note that this was written in January of 1998 and also note that this criticism is still being raised today.
  • Small Scale Propulsion: Fly on the Wall, Cockroach in the Corner, Rat in the Basement, Bird in the Sky September 1997 (1.2 MB .pdf): "This study concerns small vehicles on the battlefield, and in particular their propulsion. These vehicles may fly or travel on the ground by walking, rolling or hopping. Their purpose is to carry, generally covertly, a useful payload to a place inaccessible to man, or too dangerous for men, or in which a man or manned vehicle could not be covert." Unfortunately, things don't look to be going too well, as the technology required to create something like an "artificial vehicle as small and inconspicuous as a fly or a cockroach" is still a long ways off. That was over 6 years ago, however, so things may have improved...
  • Data Mining and the Human Genome January 2000 (1.6 MB .pdf): Work on the Human Genome is shifting from the collection of data to the analysis of data. This study seeks to apply powerful data mining techniques developed in other fields to the Human Genome and the biological sciences.
  • Opportunities at the Intersection of Nanoscience, Biology and Computation November 2002 (5.0 MB .pdf): This seems to be a popular subject, and DARPA has several programs that seek to exploit this intersection of subjects. Applications include Brain Machine Interfaces and Biomolecular Motors (which, come to think of it, might help with the propulsion of those artificial vehicles as small and inconspicuous as flies).
Interesting stuff.
Posted by Mark on January 28, 2004 at 08:13 PM .: link :.

End of This Day's Posts

Wednesday, January 21, 2004

NASA, Commercialization, and Agility
The Laughing Wolf comments on the "new" space initiative, paying particular attention to commercial interest in space... and the lack of any mention of commercialization in the new plan. He reads something into this which goes along with my thoughts on the institutional agility that will be necessary to make it to the moon and beyond.
You know, the President is not nearly as stupid as his critics try to portray him to be. In fact, he has been pretty shrewd and smart on many major issues. He may not be the best spoken person around, but he is not stupid. Do you think that he may have had some method to his madness here? For what if private industry does create and provide launch services? What if they do send probes on to the moon? Do you think that maybe NASA might, by dint of budget and language, be encouraged to make use of it? It is an intriguing possibility, since the actual language and such is not yet fully available, or perhaps even fully worked out.

Even if not, the timeline and scope provide ample opportunity for private space enterprise to prove its claims. The President has made his announcement and hit the button of his obligation here. He has honored the ideal that was NASA, and provided a cover to try to re-organize and re-focus the agency. In so doing, he has also effectively issued a challenge to the private sector: do it better and do it faster.

For if industry can, then there is the possibility of NASA having to use those services. If not, then the government can proceed on down the same tired path.
In my post on this subject, I didn't write about what the next big advance in space travel would be or who would create it, only that it would happen and that NASA would need to be agile enough to react to and exploit it. I noticed that the proposal didn't make any mention of commercial efforts, but I didn't pick up on the idea that the absence of such points was something of a challenge to the private sector.

Also, for more on the space effort, Jay Manifold has been blogging up a storm over at A Voyage To Arcturus. There is too much good stuff there to summarize, but if you're interested in this subject, check it out. Alright, one interesting thing I saw there was this conceptual illustration of a modular Crewed Exploration Vehicle. Of course, as both Jay and the Laughing Wolf note, the CEV is meant to accomplish many and varied goals, which means that while it may be versatile, it won't do any of its many tasks very well... but it is interesting nonetheless.
Posted by Mark on January 21, 2004 at 06:08 PM .: link :.

End of This Day's Posts

Sunday, January 18, 2004

To the Moon!
President Bush has laid out his vision for space exploration. Reaction has mostly been lukewarm. Naturally, there are opponents and proponents, but in my mind it is a good start. That we've changed focus to include long term manned missions on the Moon and a mission to Mars is a bold enough move for now. What is difficult is that this is a program that will span several decades... and several administrations. There will be competition and distractions. To send someone to Mars on the schedule Bush has set requires a consistent will among the American electorate as well. However, given the technology currently available, it might prove to be a wise move.

A few months ago, in writing about the death of the Galileo probe, I examined the future of manned space flight and drew a historical analogy with the pyramids. I wrote:
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.

Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it.)

"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.

Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?
We should, and I'm glad we're orienting ourselves in this direction. Bush's plan appeals to me because of its pragmatism. It doesn't seek to simply fly to Mars; it seeks to leverage the Moon first. We've already been to the Moon, but it still holds much value as a destination in itself as well as a testing ground and possibly even a base from which to launch, or at least support, our Mars mission. Some, however, find the financial side of things a little too pragmatic:
In its financial aspects, the Bush plan also is pragmatic -- indeed, too much so. The president's proposal would increase NASA's budget very modestly in the near term, pushing more expensive tasks into the future. This approach may avoid an immediate political backlash. But it also limits the prospects for near-term technological progress. Moreover, it gives little assurance that the moon-Mars program will survive the longer haul, amid changing administrations, economic fluctuations, and competition from voracious entitlement programs.
There's that problem of keeping everyone interested and happy in the long run again, but I'm not so sure we should be too worried... yet. Wretchard draws an important distinction, we've laid out a plan to voyage to Mars - not a plan to develop the technology to do so. Efforts will be proceeding on the basis of current technology, but as Wretchard also notes in a different post, current technology may be unsuitable for the task:
Current launch costs are on the order of $8,000/lb, a number that will have to be reduced by a factor of ten for the habitation of the moon, the establishment of Lagrange transfer stations or flights to Mars to be feasible. This will require technology, and perhaps even basic physics, that does not even exist. Simply building bigger versions of the Saturn V will not work. That would be "like trying to upgrade Columbus's Nina, Pinta, and Santa Maria with wings to speed up the Atlantic crossing time. A jet airliner is not a better sailing ship. It is a different thing entirely." The dream of settling Mars must await an unforeseen development.
Naturally, the unforeseen development is notoriously tricky, and while we must pursue alternate forms of propulsion, it would be unwise to hold off on the voyage until this development occurs. We must strike a delicate balance between the concentration on the goal and the means to achieve that goal. As Wretchard notes, this is largely dependent on timing. What is also important is that we are able to recognize this development when it happens and that we leave our program agile enough to react effectively to it.
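Wretchard's launch-cost figure is easy to make concrete. Here is a minimal sketch of the arithmetic; the million-pound payload is my own illustrative round number (roughly Saturn V-class tonnage to orbit), not a figure from any actual mission plan:

```python
# Rough arithmetic on the $8,000/lb launch cost quoted above, and the
# tenfold reduction said to be required for Moon bases or Mars flights.

COST_PER_LB = 8_000                  # current (2004-era) cost per pound to orbit
TARGET_PER_LB = COST_PER_LB / 10     # the needed order-of-magnitude drop

def launch_cost(payload_lbs: float, per_lb: float) -> float:
    """Total cost to lift a payload at a given per-pound rate."""
    return payload_lbs * per_lb

# Illustrative assumption: a crewed Mars stack massing ~1,000,000 lbs in orbit.
payload = 1_000_000
print(f"At $8,000/lb: ${launch_cost(payload, COST_PER_LB):,.0f}")    # $8 billion
print(f"At $800/lb:   ${launch_cost(payload, TARGET_PER_LB):,.0f}")  # $800 million
```

The gap between eight billion and eight hundred million dollars per mission is the difference between a one-off stunt and a sustainable program, which is why the cost reduction matters more than any particular vehicle design.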

Recognizing this development will prove interesting. At what point does a technology become mature enough to use for something this important? This may be relatively straightforward, but it is possible that we could jump the gun and proceed too early (or, conversely, wait too long). Once recognized, we need to be agile, by which I mean that we must develop the capacity to seamlessly adapt the current program to exploit this new development. This will prove challenging, and will no doubt require a massive increase in funding, as it will also require a certain amount of institutional agility - moving people and resources to where we need them, when we need them. Once we recognize our opportunity, we must pounce without hesitation.

It is a bold and challenging, yet judiciously pragmatic, vision that Bush has laid out, but this is only the first step. The truly important challenges are still a few years off. What is important is that we recognize and exploit any technological advances on our way to Mars, and we can only do so if we are agile enough to effectively react. Exploration of the frontiers is a part of my country's identity, and it is nice to see us proceeding along these lines again. Like the Egyptians so long ago, this mammoth project may indeed inspire a unity amongst our people. In these troubled times, that would be a welcome development. Though Europe, Japan, and China have also shown interest in such an endeavor, I, along with James Lileks, like the idea of an American being the first man on Mars:
When I think of an American astronaut on Mars, I can't imagine a face for the event. I can tell you who staffed the Apollo program, because they were drawn from a specific stratum of American life. But things have changed. Who knows who we'd send to Mars? Black pilot? White astrophysicist? A navigator whose parents came over from India in 1972? Asian female doctor? If we all saw a bulky person bounce out of the landing craft and plant the flag, we'd see that wide blank mirrored visor. Sex or creed or skin hue - we'd have no idea.

This is the quintessence of America: whatever face you'd see when the visor was raised, it wouldn't be a surprise.

Update 1.21.04: More here.
Posted by Mark on January 18, 2004 at 05:16 PM .: link :.

End of This Day's Posts

Tuesday, October 07, 2003

A Compendium of DARPA Programs
The Defense Advanced Research Projects Agency (DARPA) has been widely criticized for several of its more controversial programs, including the now defunct Terrorism Information Awareness program (rightly so) and a Futures Market used to predict terror (perhaps wrongly so), but (as Steven Aftergood has noted) it has not received the credit to which it is arguably entitled for conducting those programs in an unclassified form, in which they can be freely debated, criticized and attacked.

DARPA has recently published a complete descriptive summary of all of its (unclassified) programs, and some of it reads like a science fiction author's wishlist. It's a fascinating collection of programs and it makes for absorbing reading.

I've read a good portion of the report, and while I find it impossible to provide a summary (it is, after all, a summary in itself), I was particularly enthralled by how DARPA is attempting to exploit the intersection of biology, information technology, and physical sciences. For instance:
The Brain Machine Interface Program will create new technologies for augmenting human performance through the ability to noninvasively access codes in the brain in real time and integrate them into peripheral device or system operations.
Essentially this means that they are attempting to create an interface in which a brain accepts and controls a mechanical device as a natural part of its body. The applications for this are near limitless and, though designed for military applications (of the type you're likely to see in science fiction novels), this technology would be extremely valuable for giving paralysis or amputation patients the ability to control a motorized wheelchair or a prosthetic limb as an extension of their body.

As you might expect, many of the projects work along similar lines and could theoretically provide supporting characteristics to one another. For instance, it seems to me that a brain machine interface would be particularly useful if paired with the Exoskeletons for Human Performance Augmentation program, again creating something right out of science fiction. It also raises some rather interesting questions about our place in evolution, and whether making the transition to a cyborg-like species is inevitable. I remember Arthur C. Clarke advancing the idea that as technology progressed far beyond our capabilities, human beings would find a way to transfer their consciousness to a mechanical (or, given the amount of biological engineering going on, let's just say constructed) being, as these machines would be more efficient than the human body. Of course, that is quite far off, but it is interesting to ponder (and Clarke even went further, postulating that we would only spend a short time in our "robot" form and even transcend our physical form...)

Again, I found the biological technologies (as well as many of the nanotechnologies) that are being explored to be the most interesting bunch. One such program is attempting to actively collect information from insect populations to map areas for biohazards; another is set to develop biomolecular motors (nanomachines that convert chemical energy into mechanical work at a very high rate of efficiency). There are a lot of programs that utilize BioMagnetics and nanotechnology to attain a better monitoring capability for the human body.

Some of these projects or ideas have been around for a while and many of them are still in preliminary phases, but it is still interesting to see the breadth of ideas DARPA is exploring...

Note: Some of the information in the report is out of date, notably with respect to the "Total Information Awareness" project which was later renamed "Terrorism Information Awareness" and is now defunct.
Posted by Mark on October 07, 2003 at 10:59 PM .: link :.

End of This Day's Posts

Monday, September 08, 2003

My God! It's full of stars!
What Galileo Saw by Michael Benson : A great New Yorker article on the remarkable success of the Galileo probe. James Grimmelmann provides some fantastic commentary:
Launched fifteen years ago with technology that was a decade out of date at the time, Galileo discovered the first extraterrestrial ocean, holds the record for most flybys of planets and moons, pointed out a dual star system, and told us about nine more moons of Jupiter.

Galileo's story is the story of improvisational engineering at its best. When its main 134 KBps antenna failed to open, NASA engineers decided to have it send back images using its puny 10bps antenna. 10 bits per second! 10!

To fit images over that narrow a channel, they needed to teach Galileo some of the tricks we've learned about data compression in the last few decades. And to teach an old satellite new tricks, they needed to upgrade its entire software package. Considering that upgrading your OS rarely goes right here on Earth, pulling off a half-billion-mile remote install is pretty impressive.
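Those numbers are worth pausing on. A back-of-the-envelope sketch of what a 10 bps link means in practice; the image dimensions and the 20:1 compression ratio are my own illustrative assumptions, not figures from the article:

```python
# How long does one picture take at 10 bps versus the failed 134 Kbps antenna?

def transmit_seconds(bits: float, rate_bps: float) -> float:
    """Seconds needed to push `bits` through a link running at `rate_bps`."""
    return bits / rate_bps

raw_bits = 800 * 800 * 8     # assumed 800x800 image at 8 bits per pixel
compressed = raw_bits / 20   # assumed 20:1 compression ratio

days_slow = transmit_seconds(compressed, 10) / 86_400   # crippled backup antenna
secs_fast = transmit_seconds(compressed, 134_000)       # the failed main antenna

print(f"10 bps:   {days_slow:.1f} days per image")
print(f"134 Kbps: {secs_fast:.1f} seconds per image")
```

Even with aggressive compression, a single frame ties up the crippled link for the better part of a day, which is why teaching the old spacecraft new compression tricks was worth the risk of a remote software upgrade.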
And the brilliance doesn't end there:
As if that wasn't enough hacker brilliance, design changes in the wake of the Challenger explosion completely ruled out the original idea of just sending Galileo out to Mars and slingshotting towards Jupiter. Instead, two Ed Harris characters at NASA figured out a triple bank shot -- a Venus flyby, followed by two Earth flybys two years apart -- to get it out to Jupiter. NASA has come in for an awful lot of criticism lately, but there are still some things they do amazingly well.
Score another one for NASA (while you're at it, give Grimmelmann a few points for the Ed Harris reference). Who says NASA can't do anything right anymore? Grimmelmann observes:
The Galileo story points out, I think, that the problem is not that NASA is messed-up, but that manned space flight is messed-up.
Manned spaceflight is, in the Ursula K. LeGuin sense, perverse. It's an act of pure conspicuous waste, like eating fifty hotdogs or memorizing ten thousand digits of pi. We do it precisely because it is difficult verging on insane.
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.

Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it.)

"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.

Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?

In the meantime, Galileo is nearing its end. On September 21st, around 3 p.m. EST, Galileo will be vaporized as it plummets into Jupiter's atmosphere, sending back whatever data it still can. This destruction is deliberate; it is the answer to an intriguing ethical dilemma.
In 1996, Galileo conducted the first of eight close flybys of Europa, producing breathtaking pictures of its surface, which suggested that the moon has an immense ocean hidden beneath its frozen crust. These images have led to vociferous scientific debate about the prospects for life there; as a result, NASA officials decided that it was necessary to avoid the possibility of seeding Europa with alien life-forms.
I had never really given thought to the idea that one of our space probes could "infect" another planet with our "alien" life-forms, though it does make perfect sense. Reaction to the decision among those who worked on Galileo is mixed, most recognizing the rationale, but not wanting to let go anyway (understandable, I guess)...

For more on the pyramids, check out this paper by Marcell Graeff. The information he referenced that I used in this article came primarily from Kurt Mendelssohn's book The Riddle of the Pyramids.

Update 9.25.03 - Steven Den Beste has posted an excellent piece on the Galileo mission and more...
Posted by Mark on September 08, 2003 at 11:06 PM .: link :.

End of This Day's Posts

Sunday, May 25, 2003

Security & Technology
The other day, I was looking around for some new information on Quicksilver (Neal Stephenson's new novel, a follow up to Cryptonomicon) and I came across Stephenson's web page. I like everything about that page, from the low-tech simplicity of its design, to the pleading tone of the subject matter (the "continuous partial attention" bit always gets me). At one point, he gives a summary of a talk he gave in Toronto a few years ago:
Basically I think that security measures of a purely technological nature, such as guns and crypto, are of real value, but that the great bulk of our security, at least in modern industrialized nations, derives from intangible factors having to do with the social fabric, which are poorly understood by just about everyone. If that is true, then those who wish to use the Internet as a tool for enhancing security, freedom, and other good things might wish to turn their efforts away from purely technical fixes and try to develop some understanding of just what the social fabric is, how it works, and how the Internet could enhance it. However this may conflict with the (absolutely reasonable and understandable) desire for privacy.
And that quote got me to thinking about technology and security, and how technology never really replaces human beings; it just makes certain tasks easier, quicker, and more efficient. There was a lot of talk about this sort of thing around the early 90s, when certain security experts were promoting the use of strong cryptography and digital agents that would choose what products we would buy and spend our money for us.

As it turns out, most of those security experts seem to be changing their mind. There are several reasons for this, chief among them fallibility and, quite frankly, a lack of demand. It is impossible to build an infallible system (at least, it's impossible to recognize that you have built such a system), but even if you had accomplished such a feat, what good would it be? A perfectly secure system is also a perfectly useless system. Besides that, you have human ignorance to contend with. How many of you actually encrypt your email? It sounds odd, but most people don't even notice the little yellow lock that comes up in their browser when they are using a secure site.

Applying this to our military, there are some who advocate technology (specifically airpower) as a replacement for the grunt. The recent war in Iraq stands in stark contrast to these arguments, despite the fact that the civilian planners overruled the military's request for additional ground forces. In fact, Rumsfeld and his civilian advisors had wanted to send significantly fewer ground forces, because they believed that airpower could do virtually everything by itself. The only reason there were as many as there were was because General Franks fought long and hard for increased ground forces (being a good soldier, you never heard him complain, but I suspect there will come a time when you hear about this sort of thing in his memoirs).

None of which is to say that airpower or technology are not necessary, nor do I think that ground forces alone can win a modern war. The major lesson of this war is that we need to have balanced forces in order to respond with flexibility and depth to the varied and changing threats our country faces. Technology plays a large part in this, as it makes our forces more effective and more likely to succeed. But, to paraphrase a common argument, we need to keep in mind that weapons don't fight wars, soldiers do. While the technology we used provided us with a great deal of security, it's also true that the social fabric of our armed forces was undeniably important in the victory.

One thing Stephenson points to is an excerpt from a Sherlock Holmes novel in which Holmes argues:
...the lowest and vilest alleys in London do not present a more dreadful record of sin than does the smiling and beautiful country-side...The pressure of public opinion can do in the town what the law cannot accomplish...But look at these lonely houses, each in its own fields, filled for the most part with poor ignorant folk who know little of the law. Think of the deeds of hellish cruelty, the hidden wickedness which may go on, year in, year out, in such places, and none the wiser.
Once again, the war in Iraq provides us with a great example. Embedding reporters in our units was a controversial move, and there are several reasons the decision could have been made. One reason may very well have been that having reporters around while we fought the war may have made our troops behave better than they would have otherwise. So when we watch the reports on TV, all we see are the professional, honorable soldiers who bravely fought an enemy which was fighting dirty (because embedding reporters revealed that as well).

Communications technology made embedding reporters possible, but it was the complex social interactions that really made it work (well, to our benefit at least). We don't derive security straight from technology, we use it to bolster our already existing social constructs, and the further our technology progresses, the easier and more efficient security becomes.

Update 6.6.03 - Tacitus discusses some similar issues...
Posted by Mark on May 25, 2003 at 02:03 PM .: link :.

End of This Day's Posts

Sunday, April 06, 2003

Warp Drive Underwater by Steven Ashley : A long time ago, I wrote about Supercavitation here, but apparently missed this article, which covers the subject much more thoroughly. It focuses mostly on the military applications of this technology (though it is applicable to ocean farming and underwater exploration) and it contains a lot of detail on the most famous example of the technology, Russia's VA-111 Shkval (Squall) rocket-torpedo. Some of the details are speculative, but they give a good explanation of the technology as well as some of the main applications, which include high-speed torpedoes and underwater machine guns armed with supercavitating bullets to help clear mines. Underwater mines are a serious nuisance, and an application such as the US RAMICS program would be a huge help... [via Punchstack]
Posted by Mark on April 06, 2003 at 07:13 PM .: link :.

End of This Day's Posts

Monday, December 17, 2001

New Medium, Same Complaints
DVD Menu Design: The Failures of Web Design Recreated Yet Again by Dr. Donald A. Norman (of Nielsen Norman Group fame) : The first time I saw this, I didn't even realize that it wasn't written by Jakob Nielsen. I guess they're partners for a reason - Norman writes much the same way that Nielsen does, and with the same interface philosophy. This time they're applying the same old boring usability guidelines to DVDs. But just because they are the same doesn't mean they are useless - DVD menus are getting to be ridiculously and unnecessarily complex. There is something to be said for the artistic merit of the menu scheme, but most of the time it ends up being obnoxious (especially upon repeated viewings of the film). It's surprising that most DVDs haven't learned from the mistakes of other media. In fact, I'm going to take this opportunity to bitch about DVDs - their interfaces and their content.

  • Animated Menus : Animated entrance and exit sequences are becoming more and more obnoxious. On occasion, I'll run across a DVD that has nice looking sequences, but they are definitely a rarity. I don't need to see a 3 second clip of the movie when all I'm trying to do is turn the commentary on. And Animated Menus don't count as a "Special" Feature.
  • Extra Features :
    • One suggestion mentioned in the above article is to state the duration of each item in the special features menus, along with a brief description instead of the cryptic titles that are so often chosen more for cleverness than for informativeness (even more annoying: when the cryptic titles mentioned on the DVD sleeve are different from what actually appears on the disc!).
    • If you have a series of short 1 minute pieces, string them together into a single 20 minute mini-documentary with skippable chapters instead of making me click through each and every one. For example, on the T2: Ultimate Edition, there are something like 50 short pieces concerning makeup, F/X, etc... that are ungodly difficult to navigate.
    • A fifteen minute promotional film consisting of 10 minutes of clips from the film does not count as a documentary.
  • Commentary : A good commentary track is a gem, and I realize that directors like Stanley Kubrick can't be troubled to sit down and talk about their movies (not to mention that he's dead). But even if they can't reanimate Kubrick's corpse, they should be able to find someone else to do a good, insightful commentary. Two excellent examples: the commentary by Japanese film expert Michael Jeck on the Seven Samurai DVD and the commentary by Roger Ebert on the Dark City DVD. Both are well done and very interesting, especially in the case of Seven Samurai, which is one of those movies that demands a good commentary (and is one of the few that gets it). I want to see more of this because while it is interesting to hear about the filmmaker's perspective, works of art often take on a life of their own and move beyond anything the filmmaker originally intended.
Don't get me wrong, I love DVDs. I love the quality and all the extra content, but it's hard not to complain when only some good movies (and even some bad movies) get nice DVD treatment.
Posted by Mark on December 17, 2001 at 02:39 PM .: link :.

End of This Day's Posts

Tuesday, October 09, 2001

The Fifty Nine Story Crisis
In 1978, William J. LeMessurier, one of the nation's leading structural engineers, received a phone call from an engineering student in New Jersey. The young man was tasked with writing a paper about the unique design of the Citicorp tower in New York. The building's dramatic design was necessitated by the placement of a church. Rather than tear down the church, the designers, Hugh Stubbins and Bill LeMessurier, set their fifty-nine-story tower on four massive, nine-story-high stilts, and positioned them at the center of each side rather than at each corner. This daring scheme allowed the designers to cantilever the building's four corners, allowing room for the church beneath the northwest side.

Thanks to the prodding of the student (whose name was lost in the swirl of subsequent events), LeMessurier discovered a subtle conceptual error in the design of the building's wind braces: they were unusually sensitive to certain winds known as quartering winds. This alone wasn't cause for worry, as the wind braces would absorb the extra load under normal circumstances. But the circumstances were not normal. There had been a crucial change during construction: the braces were fastened together with bolts instead of welds (welds were considered stronger than necessary and overly expensive), and the contractors had interpreted the New York building code in such a way as to exempt many of the tower's diagonal braces from load-bearing calculations, so they had used far too few bolts. This multiplied the strain produced by quartering winds. Statistically, a storm severe enough to tear the joints apart could be expected once every sixteen years (what meteorologists call a sixteen-year storm). This was alarmingly frequent. To further complicate matters, hurricane season was fast approaching.
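A "sixteen-year storm" means roughly a 1-in-16 chance of striking in any given year, and those odds compound. A quick back-of-the-envelope sketch (my own arithmetic, not from the article) shows why the engineers were alarmed:

```python
# Chance that at least one storm with annual probability 1/16 strikes
# within a span of n years: the complement of n consecutive misses.
def p_at_least_one(annual_prob: float, years: int) -> float:
    return 1 - (1 - annual_prob) ** years

for years in (1, 5, 16):
    print(f"{years:2d} years: {p_at_least_one(1/16, years):.1%}")
```

Even over a five-year span the odds approach 28%, and over sixteen years nearly two-thirds; with hurricane season weeks away, waiting was not an option.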

The potential for a complete catastrophic failure was there, and because the building was located in Manhattan, the danger applied to nearly the entire city. The fall of the Citicorp building would likely cause a domino effect, wreaking a devastating toll of destruction in New York.

The story of this oversight, though amazing, is dwarfed by the series of events that led to the building's eventual structural integrity. To avert disaster, LeMessurier quickly and bravely blew the whistle - on himself. LeMessurier and other experts immediately drew up a plan in which workers would reinforce the joints by welding heavy steel plates over them.

Astonishingly, just after Citicorp issued a bland and uninformative press release, all of the major newspapers in New York went on strike. This fortuitous turn of events allowed Citicorp to save face and avoid any potential embarrassment. Construction began immediately, with builders and welders working from 5 p.m. until 4 a.m. to apply the steel "band-aids" to the ailing joints. They built plywood boxes around the joints so as not to disturb the tenants, who remained largely oblivious to the seriousness of the problem.

Instead of lawsuits and public panic, the Citicorp crisis was met with efficient teamwork and a swift solution. In the end, LeMessurier's reputation was enhanced for his courageous honesty, and the story of Citicorp's building is now a textbook example of how to respond to a high-profile, potentially disastrous problem.

Most of this information came from a New Yorker article by Joe Morgenstern (published May 29, 1995). It's a fascinating story, and I found myself thinking about it during the tragedies of September 11. What if those towers had toppled over in Manhattan? Fortunately, the WTC towers were extremely well designed - they didn't even noticeably rock when the planes hit - and when they did come down, they collapsed in on themselves. They would still be standing today, too, if it weren't for the intense heat that weakened the steel supports.
Posted by Mark on October 09, 2001 at 08:04 AM .: link :.

End of This Day's Posts

Thursday, September 27, 2001

Do minds play dice?
Unpredictability may be built into our brains. Neurophysiologists have found that clusters of nerve cells respond to the same stimulus differently each time, as randomly as heads or tails. The implications of this are far reaching, but I can't say I'm all that surprised. It makes evolutionary sense, in that you can evade (or even launch) attacks better by jumping from side to side. It makes sociological sense, in that a person's environment and upbringing do not necessarily dictate how they will act in the future (the most glaring examples are criminals; surely, their childhoods must have been traumatic in order for them to commit such heinous acts). It even makes sense creatively, in that "randomness results in new kinds of behaviour and combinations of ideas, which are essential to the process of discovery".
Posted by Mark on September 27, 2001 at 06:56 PM .: link :.

End of This Day's Posts

Friday, June 22, 2001

Out of This World
Scientific American's Steve Mirsky shows a sense of humor in his story about the drop-off in UFO reports, giving several flippant explanations for the lack of sightings. Some claim that the aliens have completed their survey of Earth, but Mirsky believes the idea that they could complete their survey of Earth in a mere 50 years is both ludicrous and insulting and reasons that they must have run out of their alien government funding. My favourite explanation:
The aliens have finally perfected their cloaking technology. After all, evidence of absence is not absence of evidence (which is, of course, not evidence of absence). Just because we no longer see the aliens doesn't mean they're not there. Actually, they are there; the skies are lousy with them, they're coco-butting one another's bald, veined, throbbing, giant heads over the best orbits. But until they drop the cloak because they've got some beaming to do, we won't see them.
I love the description "bald, veined, throbbing, giant heads". [via Follow Me Here]
Posted by Mark on June 22, 2001 at 01:16 PM .: link :.

End of This Day's Posts

Monday, May 21, 2001

Bending Time and Space with Light
Time twister: New Scientist reports that a professor of theoretical physics, Ronald Mallett, thinks he has found a practical way to make a time machine. Unlike other "time travel" solutions, such as wormholes, Mallett's solution relies heavily on light, a much more down-to-earth ingredient compared to the "negative energy" matter used to open wormholes. Even though light doesn't have mass, it does have the quirky ability to bend space-time. Last year, Mallett published a paper describing how a circulating beam of laser light would create a vortex in space within its circle (Physics Letters A, vol 269, p 214).
To twist time into a loop, Mallett worked out that he would have to add a second light beam, circulating in the opposite direction. Then if you increase the intensity of the light enough, space and time swap roles: inside the circulating light beam, time runs round and round, while what to an outsider looks like time becomes like an ordinary dimension of space.
The energy needed to twist time into a loop is enormous, but Mallett saw that the effect of circulating light depends on its velocity: the slower the light, the stronger the distortion in space-time. Light gains inertia as it is slowed down, so "Increasing its inertia increases its energy, and this increases the effect," Mallett says. There is still a lot of work to do to make this process a reality, and it probably won't happen for some "time", but the concept of plausible time travel in our time is intriguing, if only because of the moral and paradoxical issues it raises. The most famous paradox, of course, is going back in time to kill your grandparents, effectively negating your very own existence - but then you wouldn't be able to go back in time, would you? My favourite solution to said paradoxes is the Terminator or Bill and Ted version of time travel, in which what you've done in the past has already influenced your present (and future). [via ArsTechnica]
Posted by Mark on May 21, 2001 at 09:35 AM .: link :.

End of This Day's Posts

Tuesday, May 01, 2001

The Earthquake Rose
Earthquakes are generally considered to be nasty, rather destructive events, but after a recent earthquake in Seattle, someone noticed some interesting patterns produced by a sand tracing pendulum (or Foucault Pendulum). The entire pattern resembles an eye (some say Poseidon's eye, for the god of the sea is also the god of earthquakes), but the pupil of said eye, the part of the pattern created by the earthquake, looks very much like a rose (and thus, it is called an Earthquake Rose). It is really quite pretty, and it's fascinating that "such a massive and very destructive release of energy can also contain such delicate artistry within its chaos." [found somewhere I don't remember the name of].
Posted by Mark on May 01, 2001 at 12:22 PM .: link :.

End of This Day's Posts

Monday, April 23, 2001

Vertical City
"Bionic Tower": A 300-story supertall building originally proposed for Hong Kong is now being considered by China's leaders for Shanghai. Its European designers describe it as a "vertical city". It would house 100,000 people and contain hotels, offices, cinemas and hospitals, effectively making it possible (not necessarily preferable) to live an entire life in one building. "Dwarfing Kuala Lumpur's twin Petronas Towers, the world's tallest buildings at 1,483ft high, it would be set in a gigantic, wheel-shaped base incorporating shopping malls and car parks." The designers have devised a root-like system of foundations that would descend 656ft, surrounded by an artificial lake to absorb vibrations caused by any earth tremors. Amazing stuff; it reminds me of the gigantic cities of The Caves of Steel, where cities spanned hundreds of miles and were ultimately self-contained (which caused a nasty fear of open spaces). Such an undertaking is an engineering nightmare. If attempted, it could quite possibly fail miserably - there are so many factors and pitfalls to be avoided that there are bound to be some unforeseen consequences... [via /.]

If this venture is successful, however, it seems like it would be the world's first successful arcology. From the Arcologies egroup discussion:
Arcology is Paolo Soleri's concept of cities which embody the fusion of architecture with ecology. The arcology concept proposes a highly integrated and compact three-dimensional urban form that is the opposite of urban sprawl with its inherently wasteful consumption of land, energy, time and human resources. An arcology would need about two percent as much land as a typical city of similar population. Arcology eliminates the automobile from inside the city and reserves it for use outside the city. Walking would be the main form of transportation inside an arcology. The miniaturization of the city enables radical conservation of land, energy and resources. Arcology would rely as much as possible on the sun, the wind and other renewable energy so as to reduce pollution and dependence on fossil fuels. Arcology needs less energy per capita thus making recycling and the use of solar energy more feasible than in present cities.
Posted by Mark on April 23, 2001 at 09:42 AM .: link :.

End of This Day's Posts

Tuesday, April 17, 2001

Houston, we have a blue screen of death
Commander William Shepherd kept a mission log during the initial 136-day shift aboard the International Space Station. The log is fun reading, and you can't help but sympathize with the many frustrations the crew constantly faced. As the Laboratorium notes, many of the problems were computer related, and the resulting list is fairly comprehensive and funny as hell.

While many of those computer systems did have problems, it's important to note just how well NASA's aerospace applications work:
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
Which is really how it should be for something that pilots a space shuttle, but then, writing software for such a focused set of criteria makes things somewhat easier to implement:
Admittedly they have a lot of advantages over the rest of the software world. They have a single product: one program that flies one spaceship. They understand their software intimately, and they get more familiar with it all the time. The group has one customer, a smart one. And money is not the critical constraint: the group's $35 million per year budget is a trivial slice of the NASA pie, but on a dollars-per-line basis, it makes the group among the nation's most expensive software organizations.

And that's the point: the shuttle process is so extreme, the drive for perfection is so focused, that it reveals what's required to achieve relentless execution. The most important things the shuttle group does -- carefully planning the software in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code -- are not expensive. The process isn't even rocket science. Its standard practice in almost every engineering discipline except software engineering.
The shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government's Software Engineering Institute (SEI), a measure of the sophistication and reliability of the way they do their work. [Thanks to the Laboratorium and norton for all the info]
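Running the quoted numbers through a quick defect-density calculation (my own arithmetic; "errors/KLOC" means errors per thousand lines of code) makes the gap concrete:

```python
# Defect density implied by the quoted figures: ~1 error in a 420,000-line
# shuttle release versus ~5,000 errors in commercial code of the same size.
LINES = 420_000

def errors_per_kloc(errors: int, lines: int) -> float:
    return errors / (lines / 1000)

shuttle = errors_per_kloc(1, LINES)
commercial = errors_per_kloc(5000, LINES)
print(f"shuttle: {shuttle:.4f} errors/KLOC")
print(f"commercial: {commercial:.1f} errors/KLOC ({commercial / shuttle:.0f}x worse)")
```

By these figures the shuttle group ships about one error per four hundred thousand lines, a density several thousand times better than the commercial baseline quoted above.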
Posted by Mark on April 17, 2001 at 09:58 AM .: link :.

End of This Day's Posts

Monday, April 02, 2001

The Science Behind
The Science Behind the X-Files is quite well done. Several episodes are broken down into their various scientific elements which are further explained with referenced resources. Fun, informative, and geeky. Thanks to Nothing for pointing that site out. Nothing has a circuitry themed design similar to (and much better than) one of my first designs, except mine had NAND and NOR gates.

The Science Behind Merla's Cosmatron is also interesting. Remember Voltron? Who knew they were teaching me about sub-atomic particles... Those who examine the fake webcam pictures carefully have observed a Voltron-like object in the background...
Posted by Mark on April 02, 2001 at 07:44 PM .: link :.

End of This Day's Posts

Wednesday, March 07, 2001

Faster than a Speeding Bullet
Supercavitation essentially creates a gas bubble around all but the very nose of a projectile in order to virtually eliminate water drag and achieve high speeds (possibly breaking the sound barrier). The technology is real, and the applications range from peaceful ocean farming and exploration of Jupiter's moon Europa to supercavitating weaponry like torpedoes and bullets. However, there appear to be plenty of obstacles (like steering, constantly changing pressures, etc.) preventing such an occurrence. "Mastery of supercavitation could turn the quiet chess game of submarine warfare we know today into a mirror image of the hyper-kinetic world of aerial combat." The cinematic possibilities alone make this phenomenon intriguing. Imagine Top Gun under water. Take note, Hollywood. This could make the basis for a great movie. [thanks to F2 and metascene]
Posted by Mark on March 07, 2001 at 09:05 AM .: link :.

End of This Day's Posts

Friday, March 02, 2001

Lab Work
It's nice to see that someone writes lab reports the way I used to. I especially liked his conclusions: "Going into physics was the biggest mistake of my life. I should've declared CS. I still wouldn't have any women, but at least I'd be rolling in cash."
Posted by Mark on March 02, 2001 at 11:10 AM .: link :.

End of This Day's Posts

Tuesday, January 30, 2001

Ginger for Sale?
It seems that Amazon is now taking orders for IT, otherwise known as Ginger. Of course, they still don't know what it is, what it does, or how much it will cost, but that apparently doesn't stop people from buying it. The mystery thickens.
Posted by Mark on January 30, 2001 at 01:25 PM .: link :.

End of This Day's Posts

Monday, January 29, 2001

Read My Mind
Mind reading. It seems fantastical, but it may be true. A team of Italian neurophysiologists has discovered so-called "mirror" neurons in the brain, which seem to fire in sympathy, reflecting or perhaps simulating the actions of other people. For instance, if I were to slap myself in the face, a certain set of neurons in my brain would fire in order to make this act of stupidity happen. And if you happened to witness my moronic act, the very same set of neurons would fire in your brain (though you won't be slapping yourself silly). This discovery could go a long way toward explaining things like why people are so damn imitative, how we developed language, and why people can instantly understand how you are feeling just by observing your actions. Some people are referring to this as "mind reading", but it seems more like an advanced simulation to me. Basically, when I observe someone doing something, my brain instinctively simulates the action (by firing the appropriate neurons) and draws conclusions based on what happens. Though it may not be mind reading, it is certainly a big step forward for psychologists.

An interesting side note regarding mind reading. Some people believe we have an innate but repressed form of mind reading that sometimes surfaces in the form of "intuition" or even physical illness when faced with danger. The human brain only operates at somewhere around 10-20% efficiency, with occasional jumps to 25-30% (which is usually referred to as intuition or revelation and is associated with a possible decline in physical health). For instance, take this entry found in Wierd but True:
"train wrecks: in train wrecks the number of passengers in damaged cars is less than average by so much and so often that it cannot be a chance occurrence. somehow we know not to get on them. (work done by william cox and reported by lyall watson)"
I've heard of similar statistics referring to airplanes as well. Many planes that crash are only half full; people who didn't get on the plane just had a "bad feeling" about it or actually got sick and were unable to fly. What are our brains really capable of?
Posted by Mark on January 29, 2001 at 02:58 PM .: link :.

End of This Day's Posts

Thursday, January 25, 2001

Faith in Mathematics
Why I Like Math by Matt Stone. Nice story of a man's search for meaning and finding it through mathematics ("I became aware of an underlying superstructure that tied all my math knowledge together."). Why is it that people think religion is only comforting? Comfort is one aspect of religion, yes, but it is not everything. In many cases, I would even go so far as to say that religion is no more comforting than any other system of beliefs (be it scientific, atheistic, agnostic, or, in this case, mathematical). My naive optimism has more to do with my happiness than my religion (then again, I suppose religion has influenced my optimism). In the end, I don't think religion is as important as most people think. It plays a small part in many aspects of life, but it does not (at least, it should not) dominate everything. [via metascene]
Posted by Mark on January 25, 2001 at 09:25 AM .: link :.

End of This Day's Posts

Tuesday, January 09, 2001

What is the colour of five? What does blue taste like? Believe it or not, some people can answer these questions. These people have a rare variety of perception called synesthesia. Synesthesia literally means joined sensations, a condition that causes certain sensations to "leak" into one another. It's much deeper than a simple association or metaphor; synesthetes don't think about a sound when they see a colour, they actually hear the sound! This raises all sorts of questions regarding our view of the world and reality. Do we all have an innate form of synesthesia, possibly repressed? Who knows, but the more I think about this condition the less I'm surprised (and the more I realize how little we know about ourselves). Yet another bizarre scientific discovery...
Posted by Mark on January 09, 2001 at 04:44 PM .: link :.

End of This Day's Posts

Tuesday, December 26, 2000

Dr. Humanity or How I Learned to Stop Worrying and Love the Genome
The Human Genome in Human Context: Scientists recently announced that they had virtually completed the task of mapping the human genome. The implications of such an event vary. Some believe it will usher in a new era of Genetic Engineering, complete with a multitude of ethical fears, such as the insurability of people with genetically identifiable risks for disease or the creation of an entirely new form of Humanity. The author of the article believes that we really don't have much to worry about right now. While we may have mapped the genome, we do not yet know how to apply it. Some quotes from the article:
"Enhancements in human abilities that may come through genetic engineering will in most cases be negligible compared to those already achieved, or achievable in the future, through tools."
"The problem is compounded by the fact that the relation of genes to traits is not one-to-one. Some traits are influenced by many genes, and some genes influence many traits. The law of unintended consequences is therefore bound to operate with a vengeance."
"...there is already quite conclusive evidence that human behavior, though strongly conditioned by genetics, is not completely determined by it. "
All in all, a fascinating article and a refreshing change from the typical Horrors of Genetics diatribe. I don't think we'll be heading for a world like the one presented in the film Gattaca any time soon...
Posted by Mark on December 26, 2000 at 03:10 PM .: link :.

End of This Day's Posts

Thursday, December 21, 2000

Wierd but True
This site contains various (surprisingly insightful and referenced) blurbs about strange phenomena. What an odd world we live in. It's amazing how little we know about it. [found in the bowels of kottke]
Posted by Mark on December 21, 2000 at 11:33 PM .: link :.

End of This Day's Posts

Friday, December 15, 2000

The Designer Universe
Do we live in a "designer universe"? The laws of nature seem fine-tuned for conscious life to emerge; if the fundamental constants of physics were off by only a hair, the universe would have been a lifeless dud (no stars, no stable elements, etc.). This reminds me of one of Thomas Aquinas' 5 Ways (order in the universe implies an intelligent creator that we call God), and the finely tuned universe seems to support some sort of Cosmic Designer. However, the Cosmic Designer Hypothesis is only one way of explaining the improbable fine-tuning of nature's laws (and it is flawed to begin with). There's the "Big Fluke Hypothesis", which doesn't provide much of an explanation, and then there is the "Many Universes Hypothesis", which claims that there are, surprise, many universes (perhaps infinitely many), the idea being that we live in the one lucky universe where everything came together. All the theories have their own advantages and disadvantages, and it's quite fun to ponder why our world is the way it is...
Posted by Mark on December 15, 2000 at 01:11 PM .: link :.

End of This Day's Posts

Thursday, December 07, 2000

Taking Ballistics by Storm: An electronic gun with no mechanical parts that could theoretically fire a million rounds per minute. It was invented by former grocery wholesaler Mike O'Dwyer. I can't believe this guy, who has no formal education in ballistics, didn't kill himself while inventing this thing. [via usr/bin/girl]
Posted by Mark on December 07, 2000 at 05:14 PM .: link :.

End of This Day's Posts

Tuesday, December 05, 2000

Big Brother is Watching, Listening, Reading...
This one goes out to all the paranoid British visitors of my site: Apparently there is a Secret plan to spy on all British phone calls as well as emails and internet connections. Very scary.
Posted by Mark on December 05, 2000 at 03:56 PM .: link :.

End of This Day's Posts

Sunday, November 19, 2000

Just in time for the Holidays
Although their utility is unclear, just imagine what that guy who figured out the healing potential of testicles could do with this. Be afraid. Be very afraid.
Posted by Mark on November 19, 2000 at 10:26 PM .: link :.

End of This Day's Posts

Friday, November 17, 2000

None of them knew they were robots
Ok, we've already established that scientists are clever. We get it. Now, let's ponder how on earth they figure some of these things out. Scientists have recently discovered that they can help stroke victims recover more quickly by implanting testicle cells into patients' brains. What?! I want to know what possessed scientists to induce strokes in rats and then put testicle cells in their brains.

In mathematics news, there are signs that the Riemann hypothesis (probably the most famous problem in mathematics) is close to being proven. The Riemann hypothesis has to do with prime numbers and their distribution (it is speculated that their distribution is chaotic). Apparently, those clever scientists I keep marvelling at have found a link between the Riemann hypothesis and the physical world. If this connection proves to be true, it would be a huge boost to our understanding of the universe (there are tons of proofs in mathematics that start: "Assuming the Riemann hypothesis is true...").
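To get a feel for what "distribution of primes" means here: the Prime Number Theorem says the count of primes up to x is approximated by x/ln(x), and the Riemann hypothesis is essentially a precise bound on how far the true count strays from such smooth approximations. A small sketch (my own illustration, not from the linked article):

```python
import math

def prime_count(n: int) -> int:
    """Count primes up to n with a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

# pi(x) versus the smooth Prime Number Theorem estimate x / ln(x)
for x in (1_000, 10_000, 100_000):
    print(x, prime_count(x), round(x / math.log(x)))
```

The estimate undershoots slightly but tracks the true count; the hypothesis concerns exactly how small that discrepancy can be guaranteed to stay.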
Posted by Mark on November 17, 2000 at 02:06 PM .: link :.

End of This Day's Posts

Tuesday, October 17, 2000

Nirvana on the Freeway
Another interesting article concerning traffic congestion suggests that certain traffic densities "transform the whole mess into a state of crystalline harmony". However, this state is extremely sensitive, which probably explains why I have never witnessed said "crystalline harmony" in traffic.

I know this whole traffic jam situation seems hopeless, but this guy claims there is hope and he goes into fairly deep detail about the whole situation. This article is excellent, and I even tried some of his "solutions" and they appeared to get me to my destination quicker than usual, though I really fail to see how my driving can affect the people in front of me (though I can see that the people behind me are in a state of uniform movement, which is pretty damn cool).
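The counterintuitive idea that one driver can smooth out the flow behind them can be illustrated with a standard toy model of traffic, the Nagel-Schreckenberg cellular automaton (my own sketch, not taken from the linked articles): cars accelerate toward a speed limit, brake to avoid the car ahead, and occasionally slow at random, which is all it takes for phantom jams to form.

```python
import random

ROAD_LEN = 100  # circular road of 100 cells

def step(cars, vmax=5, p_slow=0.3):
    """One update of the Nagel-Schreckenberg traffic model.
    cars: dict mapping cell position -> current speed."""
    new = {}
    for pos, v in cars.items():
        gap = 1
        while (pos + gap) % ROAD_LEN not in cars:  # distance to car ahead
            gap += 1
        v = min(v + 1, vmax, gap - 1)          # accelerate, but never rear-end
        if v > 0 and random.random() < p_slow:
            v -= 1                             # random slowdown: the jam-maker
        new[(pos + v) % ROAD_LEN] = v
    return new

random.seed(0)
cars = {i * 5: 0 for i in range(20)}  # 20 evenly spaced cars, at rest
for _ in range(50):
    cars = step(cars)
print(len(cars), max(cars.values()))  # cars are conserved; speeds stay <= 5
```

Even with no accidents and no bottlenecks, the random slowdowns ripple backwards into stop-and-go waves, which is roughly what the "drive smoothly and absorb the waves" advice is trying to counteract.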

Some things I noticed people doing in their cars while waiting at the Tollbooths of the PA Turnpike:
  • Playing with something hanging off their rearview mirror
  • Dancing retardedly to their stupid music (which is being played way too loud)
  • Reading a novel
  • Washing off their windshield
  • Picking their nose (my personal favourite)
So the next time you get stuck in traffic and feel the compulsive need to pick your nose, remember, people can see through your windows. Then pick your nose anyway.
Posted by Mark on October 17, 2000 at 02:00 PM .: link :.

End of This Day's Posts

Monday, October 16, 2000

Traffic Week
Since I have been spending the better part of my recent life stuck in traffic, I've become intrigued with the ebb and flow of stop and go. "Scientists said they are closer to comprehending the birth of the universe than the daily tie-ups along Interstate 66." Joy. It doesn't help that the road system is not being expanded to handle the increased volume (i.e., more cars, no new roads). Then again, some say the problem is congestion, not lack of roads (more lanes means more congestion)... Not to mention that roads, specifically in the northeast, are in a constant state of (dis)repair due to increasing volume and the extremes of weather. More joy.
Posted by Mark on October 16, 2000 at 09:51 AM .: link :.

End of This Day's Posts

Friday, October 13, 2000

Most people are aware that scientists are bright guys. Very intelligent, they are. But faster-than-light light? This is insanity (or maybe it's genius)! Apparently scientists have figured out a way to have light exit a box before it even enters. Mindbending shite. I need a drink.

BTW, Amazon is back to its old bloated self. Damn.
Posted by Mark on October 13, 2000 at 08:44 AM .: link :.

End of This Day's Posts

Where am I?
This page contains entries posted to the Kaedrin Weblog in the Science & Technology Category.

Copyright © 1999 - 2012 by Mark Ciocco.