|Science & Technology|
Sunday, September 15, 2013
The Myth of Digital Distribution
The movie lover's dream service would be something we could subscribe to that would give us a comprehensive selection of movies to stream. This service is easy to conceive, and it's such an alluring idea that it makes people want to eschew tried-and-true distribution methods like DVDs and Blu-Ray. We've all heard the arguments before: physical media is dead, streaming is the future. When I made the move to Blu-Ray about 6 years ago, I estimated that it would take at least 10 years for a comprehensive streaming service to become feasible. The more I see, the more I think that I drastically underestimated that timeline... and am beginning to feel like it might never happen at all.
MGK illustrates the problem well with this example:
this is the point where someone says "but we're all going digital instead" and I get irritated by this because digital is hardly an answer. First off, renting films - and when you "buy" digital movies, that's what you're doing almost every single time - is not the same as buying them. Second, digital delivery is getting more and more sporadic as rights get more and more expensive for distributors to purchase.

Situations like this are an all too common occurrence, and not just with movies. It turns out that content owners can't be bothered with a title unless it's either new or in the public domain. This graph from a Rebecca Rosen article nicely illustrates the black hole that our extended copyright regime creates:
[The graph] reveals, shockingly, that there are substantially more new editions available of books from the 1910s than from the 2000s. Editions of books that fall under copyright are available in about the same quantities as those from the first half of the 19th century. Publishers are simply not publishing copyrighted titles unless they are very recent.

More interpretation:
This is not a gently sloping downward curve! Publishers seem unwilling to sell their books on Amazon for more than a few years after their initial publication. The data suggest that publishing business models make books disappear fairly shortly after their publication and long before they are scheduled to fall into the public domain. Copyright law then deters their reappearance as long as they are owned. On the left side of the graph before 1920, the decline presents a more gentle time-sensitive downward sloping curve.

This is absolutely absurd, though it's worth noting that it doesn't control for used books (which are generally pretty easy to find on Amazon), and while content owners don't seem to be rushing to digitize their catalogs, future generations may not face quite the same problem. They'll probably still have trouble with 80s and 90s content, but anything from 2010 onward should theoretically be available on an indefinite basis, because everything published today gets put on digital/streaming services.
Of course, intellectual property law being what it is, I'm sure that new proprietary formats and readers will render old digital copies obsolete, and once again, consumers will be hard pressed to see that 15 year old movie or book ported to the latest-and-greatest channel. It's a weird and ironic state of affairs when the content owners are so greedy in hoarding and protecting their works, yet so unwilling to actually, you know, profit from them.
I don't know what the solution is here. There have been some interesting ideas about having copyright expire for books that have been out of print for a certain period of time (say, 5-10 years), but that would only work now - again, future generations will theoretically have those digital versions available. They may be in a near obsolete format, but they're available! It doesn't seem likely that sensible copyright reform could be passed. It would be nice if we could take a page from the open source playbook, but I seriously doubt that content owners would ever be that forward thinking.
As MGK noted, DVD ushered in an era of amazing availability, but much of that stuff has gone out of print, and we somehow appear to be regressing from that.
Posted by Mark on September 15, 2013 at 06:03 PM .: link :.
Wednesday, May 08, 2013
I have, for the most part, been very pleased with using my Kindle Touch to read over the past couple of years. However, while it got the job done, I felt like there were a lot of missed opportunities, especially when it came to metadata and personal metrics. Well, Amazon just released a new update to their Kindle software, and mixed in with the usual (i.e. boring) updates to features I don't use (like "Whispersync" or Parental Controls), there was this little gem:
The Time To Read feature uses your reading speed to let you know how much time is left before you finish your chapter or before you finish your book. Your specific reading speed is stored only on your Kindle Touch; it is not stored on Amazon servers.

Hot damn, that's exactly what I was asking for! Of course, it's all locked down and you can't really see what your reading speed is (or plot it over time, or by book, etc...), but this is the single most useful update to a device like this that I think I've ever encountered. Indeed, the fact that it tells you how much time until you finish both your chapter and the entire book is extremely useful, and it addresses my initial curmudgeonly complaints about the Kindle's hatred of page numbers and love of percentage.
Will finish this book in about 4 hours!
And I love that they give a time to read for both the current chapter and the entire book. One of the frustrating things about reading an ebook is that you never really know how long it will take to read a chapter. With a physical book, you can easily flip ahead and see where the chapter ends. Now, ebooks have that personalized time, which is perfect.
I haven't spent a lot of time with this new feature, but so far, I love it. I haven't done any formal tracking, but it seems accurate, too (it seems like I'm reading faster than it says, but it's close). It even seems to recognize when you've taken a break (though I'm not exactly sure of that). Of course, I would love it if Amazon would allow us access to the actual reading speed data in some way. I mean, I can appreciate their commitment to privacy, and I don't think that needs to change either; I'd just like to be able to see some reports on my actual reading speed. Plot it over time, see how different books impact speed, and so on. Maybe I'm just a data visualization nerd, but think of the graphs! I love this update, but they're still only scratching the surface here. There's a lot more there for the taking. Let's hope we're on our way...
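Amazon doesn't say how Time To Read actually works, but the basic arithmetic is presumably something like the sketch below: track words read per minute, then divide the words remaining by that rate. All the numbers and function names here are hypothetical, not Amazon's.

```python
def reading_speed_wpm(words_read, minutes_elapsed):
    """Estimated reading speed in words per minute."""
    return words_read / minutes_elapsed

def time_to_finish(words_remaining, wpm):
    """Minutes remaining at the current reading speed."""
    return words_remaining / wpm

# Hypothetical session: 9,000 words read over 30 minutes -> 300 wpm
speed = reading_speed_wpm(9000, 30)

# 24,000 words left in the book -> 80 minutes to go
minutes_left = time_to_finish(24000, speed)
```

The interesting part is presumably everything this sketch leaves out: detecting when you've put the device down, smoothing the rate over multiple sessions, and doing it all on-device for privacy.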
Posted by Mark on May 08, 2013 at 08:42 PM .: link :.
Wednesday, April 24, 2013
The State of Streaming
So Netflix has had a good first quarter, exceeding expectations and crossing the $1 Billion revenue threshold. Stock prices have been skyrocketing, going from sub 100 to over 200 in just the past 4-5 months. Their subscriber base continues to grow, and fears that people would use the free trial to stream exclusive content like House of Cards, then bolt from the service seem unfounded. However, we're starting to see a fundamental shift in the way Netflix is doing business here. For the first time ever, I'm seeing statements like this:
As we continue to focus on exclusive and curated content, our willingness to pay for non-exclusive, bulk content deals declines.

I don't like the sound of that, but then, the cost of non-exclusive content seems to keep rising at an absurd level, and well, you know, it's not exclusive. The costs have risen to somewhere on the order of $2 billion per year on content licensing and original shows. So statements like this seem like a natural outgrowth of that cost:
As we've gained experience, we've realized that the 20th documentary about the financial crisis will mostly just take away viewing from the other 19 such docs, and instead of trying to have everything, we should strive to have the best in each category. As such, we are actively curating our service rather than carrying as many titles as we can.

And:
We don't and can't compete on breadth with Comcast, Sky, Amazon, Apple, Microsoft, Sony, or Google. For us to be hugely successful we have to be a focused passion brand. Starbucks, not 7-Eleven. Southwest, not United. HBO, not Dish.

This all makes perfect sense from a business perspective, but as a consumer, this sucks. I don't want to have to subscribe to 8 different services to watch 8 different shows that seem interesting to me. Netflix's statements and priorities seem to be moving, for the first time, away from a goal of providing a streaming service with a wide, almost comprehensive selection of movies and television. Instead, we're getting a more curated approach coupled with original content. That wouldn't be the worst thing ever, but Netflix isn't the only one playing this game. Amazon just released 14 pilot episodes for their own exclusive content. I'm guessing it's only a matter of time before Hulu joins this roundelay (and for all I know, they're already there - I've just hated every experience I've had with Hulu so much that I don't really care to look into it). HBO is already doing its thing with HBO Go, which exclusively streams their shows. How many other streaming services will I have to subscribe to if I want to watch TV (or movies) in the future? Like it or not, fragmentation is coming. And no one seems to be working on a comprehensive solution anymore (at least, not in a monthly subscription model - Amazon and iTunes have pretty good a la carte options). This is frustrating, and I feel like there's a big market for this thing, but at the same time, content owners seem to be overcharging for their content. If Netflix's crappy selection costs $2 billion a year, imagine what something even remotely comprehensive would cost (easily 5-10 times that amount, which is clearly not feasible).
Incidentally, Netflix's third exclusive series, Hemlock Grove, premiered this past weekend. I tried to watch the first episode, but I fell asleep. What I remember was pretty schlocky and not particularly inspiring... but I have a soft spot for cheesy stuff like this, so I'll give it another chance. Still, the response seems a bit mixed on this one. I did really end up enjoying House of Cards, but I'm not sure how much I'm going to stick with Hemlock Grove...
Posted by Mark on April 24, 2013 at 09:28 PM .: link :.
Sunday, January 06, 2013
What's in a Book Length?
I mentioned recently that book length is something that's been bugging me. It seems that we have a somewhat elastic relationship with length when it comes to books. The traditional indicator of book length is, of course, page number... but due to variability in font size, type, spacing, format, media, and margins, the hallowed page number may not be as concrete as we'd like. Ebooks theoretically provide an easier way to maintain a consistent measurement across different books, but it doesn't look like anyone's delivered on that promise. So how are we to know the lengths of our books? Fair warning, this post is about to get pretty darn nerdy, so read on at your own peril.
In terms of page numbers, books can vary wildly. Two books with the same number of pages might be very different in terms of actual length. Let's take two examples: Gravity's Rainbow (784 pages) and Harry Potter and the Goblet of Fire (752 pages). Looking at page number alone, you'd say that Gravity's Rainbow is only slightly longer than Goblet of Fire. With the help of the magical internets, let's take a closer look at the print inside the books (click image for a bigger version):
Ebooks present a potential solution. Because ereaders have different sized screens and even allow the reader to choose font sizes and other display options, page numbers start to seem irrelevant. So ebook makers devised what are called reflowable documents, which adapt their presentation to the output device. For example, Amazon's Kindle uses an ebook format that is reflowable. It does not (usually) feature page numbers, instead relying on a percentage indicator and the mysterious "Location" number.
The Location number is meant to be consistent, no matter what formatting options you're using on your ereader of choice. Sounds great, right? Well, the problem is that the Location number is pretty much just as arbitrary as page numbers. It is, of course, more granular than a page number, so you can easily skip to the exact location on multiple devices, but as for what actually constitutes a single "Location Number", that is a little more tricky.
In looking around the internets, it seems there is distressingly little information about what constitutes an actual Location. According to this thread on Amazon, someone claims that: "Each location is 128 bytes of data, including formatting and metadata." This rings true to me, but unfortunately, it also means that the Location number is pretty much meaningless.
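If that forum claim is right, you could estimate a book's Location count straight from its file size. Bear in mind the 128-byte figure is an unverified claim from a user thread, not anything Amazon has documented:

```python
# Claimed on the Amazon forum thread -- one Location per 128 bytes of
# data, including formatting and metadata. Treat as an assumption.
BYTES_PER_LOCATION = 128

def estimated_locations(file_size_kb):
    """Rough Location count from a Kindle file's size in KB."""
    return (file_size_kb * 1024) // BYTES_PER_LOCATION

# A 500 KB ebook would come out to 4,000 Locations under this model.
print(estimated_locations(500))  # 4000
```

Even if the constant is right, this is exactly why the number is meaningless as a measure of reading length: formatting, metadata, and embedded images all inflate the byte count without adding a single word of text.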
The elastic relationship we have with book length is something I've always found interesting, but what made me want to write this post was when I wanted to pick a short book to read in early December. I was trying to make my 50 book reading goal, so I wanted something short. In looking through my book queue, I saw Alfred Bester's classic SF novel The Stars My Destination. It's one of those books I consistently see at the top of best SF lists, so it's always been on my radar, and looking at Amazon, I saw that it was only 236 pages long. Score! So I bought the ebook version and fired up my Kindle only to find that in terms of locations, it's the longest book I have on my Kindle (as of right now, I have 48 books on there). This is when I started looking around at Locations and trying to figure out what they meant. As it turns out, while the Location numbers provide a consistent reference within the book, they're not at all consistent across books.
I did a quick spot check of 6 books on my Kindle, looking at total Location numbers, total page numbers (resorting to print version when not estimated by Amazon), and file size of the ebook (in KB). I also added a column for Locations per page number and Locations per KB. This is an admittedly small sample, but what I found is that there is little consistency among any of the numbers. The notion of each Location being 128 bytes of data seems useful at first, especially when you consider that the KB information is readily available, but because that includes formatting and metadata, it's essentially meaningless. And the KB number also includes any media embedded in the book (i.e. illustrations crank up the KB, which distorts any calculations you might want to do with that data).
It turns out that The Stars My Destination will probably end up being relatively short, as the page numbers would imply. There's a fair amount of formatting within the book (which, by the way, doesn't look so hot on the Kindle), and doing spot checks of how many Locations I pass when cycling to the next screen, it appears that this particular ebook is going at a rate of about 12 Locations per cycle, while my previous book was going at a rate of around 5 or 6 per cycle. In other words, while the total Locations for The Stars My Destination were nearly twice what they were for my previously read book, I'm also cycling through Locations at double the rate. Meaning that, basically, this is the same length as my previous book.
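The back-of-envelope math here is just total Locations divided by Locations consumed per screen, which gives an "effective length" in screens. The totals below are hypothetical round numbers chosen to mirror the ratios from my spot check (twice the Locations, consumed at twice the rate):

```python
def effective_length_in_screens(total_locations, locations_per_screen):
    """How many screen-cycles a book takes at a given consumption rate."""
    return total_locations / locations_per_screen

# Hypothetical figures mirroring the post: The Stars My Destination has
# roughly double the Locations, but burns through them at double the rate.
stars = effective_length_in_screens(12000, 12)
previous = effective_length_in_screens(6000, 6)

# Both come out to 1,000 screens -- effectively the same length.
```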
Various attempts have been made to convert Location numbers to page numbers, with low degrees of success. This is due to the generally elastic nature of a page, combined with the inconsistent size of Locations. For most books, it seems like dividing the Location numbers by anywhere from 12-16 (the linked post posits dividing by 16.69, but the books I checked mostly ranged from 12-16) will get you a somewhat accurate page number count that is marginally consistent with print editions. Of course, for The Stars My Destination, that won't work at all. For that book, I have to divide by 40.86 to get close to the page number.
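The conversion itself is trivial; the problem is entirely in picking the divisor. A sketch, using a middle-of-the-road default from the 12-16 range I observed (the 9,643-Location total below is reconstructed from 236 pages × 40.86, not a figure I pulled off the device):

```python
def locations_to_pages(total_locations, divisor=14.0):
    """Estimate print pages from a Kindle Location count.

    The divisor is book-specific: roughly 12-16 for most titles I
    checked, but 40.86 for The Stars My Destination.
    """
    return round(total_locations / divisor)

# With the book-specific divisor, we land on the 236-page print length:
pages = locations_to_pages(9643, divisor=40.86)
```

Which is just another way of saying the formula is useless in advance: you only know the right divisor after you already know the page count.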
Why is this important at all? Well, there's clearly an issue with ebooks in academia, because citations are so important for that sort of work. Citing a location won't get readers of a paper anywhere close to a page number in a print edition (whereas, even using differing editions, you can usually track down the quote relatively easily if a page number is referenced). On a personal level, I enjoy reading ebooks, but one of the things I miss is the easy and instinctual notion of figuring out how long a book will take to read just by looking at it. Last year, I was shooting for reading quantity, so I wanted to tackle shorter books (this year, I'm trying not to pay attention to length as much and will be tackling a bunch of large, forbidding tomes, but that's a topic for another post)... but there really wasn't an easily accessible way to gauge the length. As we've discovered, both page numbers and Location numbers are inconsistent. In general, the larger the number, the longer the book, but as we've seen, that can be misleading in certain edge cases.
So what is the solution here? Well, we've managed to work with variable page numbers for thousands of years, so maybe no solution is really needed. A lot of newer ebooks even contain page numbers (despite the variation in display), so if we can find a way to make that more consistent, that might help make things a little better. But the ultimate solution would be to use something like Word Count. That's a number that might not be useful in the midst of reading a book, but if you're really looking to determine the actual length of the book, Word Count appears to be the best available measurement. It would also be quite easily calculated for ebooks. Is it perfect? Probably not, but it's better than page numbers or location numbers.
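And computing a word count really is trivial for anyone with access to the ebook's text, which is the whole appeal. A minimal sketch on a plain string:

```python
def word_count(text):
    """Naive word count: whitespace-separated tokens."""
    return len(text.split())

sample = "It was a bright cold day in April, and the clocks were striking thirteen."
print(word_count(sample))  # 14
```

Publishers could compute this once at production time and stamp it into the book's metadata; the only real arguments are about edge cases like hyphenation and front matter, which hardly matter at the scale of a whole novel.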
In the end, I enjoy using my Kindle to read books, but I wish they'd get on the ball with this sort of stuff. If you're still reading this (Kudos to you) and want to read some more babbling about ebooks and where I think they should be going, check out my initial thoughts and my ideas for additional metadata and the gamification of reading. The notion of ereaders really does open up a whole new world of possibilities... it's a shame that Amazon and other ereader companies keep their platforms so locked down and uninteresting. Of course, reading is its own reward, but I really feel like there's a lot more we can be doing with our ereader software and hardware.
Posted by Mark on January 06, 2013 at 08:02 PM .: link :.
Wednesday, August 08, 2012
Web browsers I have known, 1996-2012
Jason Kottke recently recapped all of the browsers he used as his default for the past 18 years. It sounded like fun, so I'm going to shamelessly steal the idea and list out my default browsers for the past 16 years (prior to 1996, I was stuck in the dark ages of dialup AOL - but once I went away to college and discovered the joys of T1/T3 connections, my browsing career started in earnest, so that's when I'm starting this list).
Posted by Mark on August 08, 2012 at 09:23 PM .: link :.
Wednesday, April 11, 2012
More Disgruntled, Freakish Reflections on ebooks and Readers
While I have some pet peeves with the Kindle, I've mostly found it to be a good experience. That being said, there are some things I'd love to see in the future. These aren't really complaints, as some of this stuff isn't yet available, but there are a few opportunities afforded by the electronic nature of eBooks that would make the whole process better.
Posted by Mark on April 11, 2012 at 09:22 PM .: link :.
Wednesday, February 15, 2012
Last week, I looked at commonplace books and various implementation solutions. Ideally, I wanted something open and flexible that would also provide some degree of analysis in addition to the simple data aggregation most tools provide. I wanted something that would take into account a wide variety of sources in addition to my own writing (on this blog, for instance). Most tools provide a search capability of some kind, but I was hoping for something more advanced. Something that would make connections between data, or find similarities with something I'm currently writing.
At first glance, Zemanta seemed like a promising candidate. It's a "content suggestion engine" specifically built for blogging and it comes pre-installed on a lot of blogging software (including Movable Type). I just had to activate it, which was pretty simple. Theoretically, it continually scans a post in progress (like this one) and provides content recommendations, ranging from simple text links defining key concepts (i.e. links to Wikipedia, IMDB, Amazon, etc...), to imagery (much of which seems to be integrated with Flickr and Wikipedia), to recommended blog posts from other folks' blogs. One of the things I thought was really neat was that I could input my own blogs, which would then give me more personalized recommendations.
Unfortunately, results so far have been mixed. There are some things I really like about Zemanta, but it's pretty clearly not the solution I'm looking for. Some assorted thoughts:
I will probably continue to play with Zemanta, but I suspect it will be something that doesn't last much longer. It provides some value, but it's ultimately not as convenient as I'd like, and its analysis and recommendation functions don't seem as useful as I'd hoped.
I've also been playing around with Evernote more and more, and I feel like that could be a useful tool, despite the fact that it doesn't really offer any sort of analysis (though it does have a simple search function). There's at least one third party, though, that seems to be positioning itself as an analysis tool that will integrate with Evernote. That tool is called Topicmarks. Unfortunately, I seem to be having some issues integrating my Evernote data with that service. At this rate, I don't know that I'll find a great tool for what I want, but it's an interesting subject, and I'm guessing it will be something that will become more and more important as time goes on. We're living in the Information Age; it seems only fair that our aggregation and analysis tools get more sophisticated.
Posted by Mark on February 15, 2012 at 06:08 PM .: link :.
Wednesday, February 08, 2012
During the Enlightenment, most intellectuals kept what's called a Commonplace Book. Basically, folks like John Locke or Mark Twain would curate transcriptions of interesting quotes from their readings. It was a personalized record of interesting ideas that the author encountered. When I first heard about the concept, I immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past decade or so on this blog. It's not very organized, though, and it's something that's been banging around in my head for the better part of the last year or so.
Locke was a big fan of Commonplace Books, and he spent years developing an intricate system for indexing his books' content. It was, of course, a ridiculous and painstaking process, but it worked. Fortunately for us, this is exactly the sort of thing that computer systems excel at, right? The reason I'm writing this post is a small confluence of events that has led me to consider creating a more formal Commonplace Book. Despite my earlier musing on the subject, this blog doesn't really count. It's not really organized correctly, and I don't publish all the interesting quotes that I find. Even if I did, it's not really in a format that would do me much good. So I'd need to devise another plan.
Why do I need a plan at all? What's the benefit of a commonplace book? Well, I've been reading Steven Johnson's book Where Good Ideas Come From: The Natural History of Innovation and he mentions how he uses a computerized version of the commonplace book:
For more than a decade now, I have been curating a private digital archive of quotes that I've found intriguing, my twenty-first century version of the commonplace book. ... I keep all these quotes in a database using a program called DEVONthink, where I also store my own writing: chapters, essays, blog posts, notes. By combining my own words with passages from other sources, the collection becomes something more than just a file storage system. It becomes a digital extension of my imperfect memory, an archive of all my old ideas, and the ideas that have influenced me.

This DEVONthink software certainly sounds useful. It's apparently got this fancy AI that will generate semantic connections between quotes and what you're writing. It's advanced enough that many of those connections seem to be subtle and "lyrical", finding connections you didn't know you were looking for. It sounds perfect except for the fact that it only runs on Mac OSX. Drats. It's worth keeping in mind in case I ever do make the transition from PC to Mac, but it seems like lunacy to do so just to use this application (which, for all I know, will be useless to me).
By sheer happenstance, I've also been playing around with Pinterest lately, and it occurs to me that it's a sort of commonplace book, albeit one with more of a narrow focus on images and video (and recipes?) than quotes. There are actually quite a few sites like that. I've been curating a large selection of links on Delicious for years now (1600+ links on my account). Steven Johnson himself has recently contributed to a new web startup called Findings, which is primarily concerned with book quotes. All of this seems rather limiting, and quite frankly, I don't want to be using 7 completely different tools to do the same thing, but for different types of media.
I also took a look at Tumblr again, this time evaluating it from a commonplacing perspective. There are some really nice things about the interface and the ease with which you can curate your collection of media. The problem, though, is that their archiving system is even more useless than most blog software. It's not quite the hell that is Twitter archives, but that's a pretty low bar. Also, as near as I can tell, the data is locked up on their server, which means that even if I could find some sort of indexing and analysis tool to run through my data, I won't really be able to do so (Update: apparently Tumblr does have a backup tool, but only for use with OSX. Again!? What is it with you people? This is the internet, right? How hard is it to make this stuff open?)
Evernote shows a lot of promise and probably warrants further examination. It seems to be the go-to alternative for lots of researchers and writers. It's got a nice cloud implementation with a robust desktop client and the ability to export data as I see fit. I'm not sure if its search will be as sophisticated as what I ultimately want, but it could be an interesting tool.
Ultimately, I'm not sure the tool I'm looking for exists. DEVONthink sounds pretty close, but it's hard to tell how it will work without actually using the damn thing. The ideal would be a system where you can easily maintain a whole slew of data and metadata, to the point where I could be writing something (say a blog post or a requirements document for my job) and the tool would suggest relevant quotes/posts based on what I'm writing. This would probably be difficult to accomplish in real-time, but a "Find related content" feature would still be pretty awesome. Anyone know of any alternatives?
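The crudest possible version of that "find related content" feature is just scoring your stored quotes against a draft by word overlap. Tools like DEVONthink and Zemanta are obviously far more sophisticated than this; the sketch below (all sample quotes invented) is only meant to illustrate the mechanism:

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# A tiny invented "commonplace book" of stored quotes:
quotes = [
    "Good ideas come from the slow collision of smaller hunches.",
    "The commonplace book was a tool for curating interesting quotes.",
]

# Score each quote against a draft in progress and surface the best match.
draft = "I am curating quotes in a commonplace book."
best = max(quotes, key=lambda q: cosine_similarity(draft, q))
```

A real tool would swap raw counts for TF-IDF weighting or actual semantic analysis, but even this naive version shows why the feature is feasible: the matching is cheap; the hard part is getting all your quotes into one open, indexable place.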
Update: Zemanta! I completely forgot about this. It comes installed by default with my blogging software, but I had turned it off a while ago because it took forever to load and was never really that useful. It's basically a content recommendation engine, pulling content from lots of internet sources (notably Wikipedia, Amazon, Flickr and IMDB). It's also grown considerably in the time since I'd last used it, and it now features a truckload of customization options, including the ability to separate general content recommendations from your own, personally curated sources. So far, I've only connected my two blogs to the software, but it would be interesting if I could integrate Zemanta with Evernote, Delicious, etc... I have no idea how great the recommendations will be (or how far back it will look on my blogs), but this could be exactly what I was looking for. Even if integration with other services isn't working, I could probably create myself another blog just for quotes, and then use that blog with Zemanta. I'll have to play around with this some more, but I'm intrigued by the possibilities.
Posted by Mark on February 08, 2012 at 05:31 PM .: link :.
Wednesday, January 18, 2012
I was going to write the annual arbitrary movie awards tonight, but since the web has apparently gone on strike, I figured I'd spend a little time talking about that instead. Many sites, including the likes of Wikipedia and Reddit, have instituted a complete blackout as part of a protest against two ill-conceived pieces of censorship legislation currently being considered by the U.S. Congress (these laws are called the Stop Online Piracy Act and Protect Intellectual Property Act, henceforth to be referred to as SOPA and PIPA). I can't even begin to pretend that blacking out my humble little site would accomplish anything, but since a lot of my personal and professional livelihood depends on the internet, I suppose I can't ignore this either.
For the uninitiated, if the bills known as SOPA and PIPA become law, many websites could be taken offline involuntarily, without warning, and without due process of law, based on little more than an alleged copyright owner's unproven and uncontested allegations of infringement1. The reason Wikipedia is blacked out today is that they depend solely on user-contributed content, which means they would be a ripe target for overzealous copyright holders. Sites like Google haven't blacked themselves out, but have staged a bit of a protest as well, because under the provisions of the bill, even just linking to a site that infringes upon copyright is grounds for action (and thus search engines have a vested interest in defeating these bills). You could argue that these bills are well intentioned, and from what I can tell, their original purpose seemed to be more about foreign websites and DNS, but the road to hell is paved with good intentions and as written, these bills are completely absurd.
Lots of other sites have been registering their feelings on the matter. ArsTechnica has been posting up a storm. Shamus has a good post on the subject which is followed by a lively comment thread. But I think Aziz hits the nail on the head:
Looks like the DNS provisions in SOPA are getting pulled, and the House is delaying action on the bill until February, so it’s gratifying to see that the activism had an effect. However, that activism would have been put to better use to educate people about why DRM is harmful, why piracy should be fought not with law but with smarter pro-consumer marketing by content owners (lowered prices, more options for digital distribution, removal of DRM, fair use, and ubiquitous time-shifting). Look at the ridiculous limitations on Hulu Plus - even if you’re a paid subscriber, some shows won’t air episodes until the week after, old episodes are not always available, some episodes can only be watched on the computer and are restricted from mobile devices. These are utterly arbitrary limitations on watching content that just drive people into the pirates’ arms.

I may disagree with some of the other things in Aziz's post, but the above paragraph is important, and for some reason, people aren't talking about this aspect of the story. Sure, some folks are disputing the numbers, but few are pointing out the things that IP owners could be doing instead of legislation. For my money, the most important thing that IP owners have forgotten is convenience. Aziz points out Hulu, which is one of the worst services I've ever seen in terms of being convenient or even just intuitive to customers. I understand that piracy is frustrating for content owners and artists, but this is not the way to fight piracy. It might be disheartening to acknowledge that piracy will always exist, but it probably will, so we're going to have to figure out a way to deal with it. The one thing we've seen work is convenience. Despite the fact that iTunes had DRM, it was loose enough and convenient enough that it became a massive success (it now doesn't have DRM, which is even better).
People want to spend money on this stuff, but more often than not, content owners are making it harder on the paying customer than on the pirate. SOPA/PIPA is just the latest example of this sort of thing.
I've already written about my thoughts on Intellectual Property, Copyright and DRM, so I encourage you to check that out. And if you're so inclined, you can find out what senators and representatives are supporting these bills, and throw them out in November (or in a few years, if need be). I also try to support companies or individuals that put out DRM-free content (for example, Louis CK's latest concert video has been made available, DRM free, and has apparently been a success).
Intellectual Property and Copyright is a big subject, and I have to be honest in that I don't have all the answers. But the way it works right now just doesn't seem right. A copyrighted work released just before I was born (i.e. Star Wars) probably won't enter the public domain until after I'm dead (I'm generally an optimistic guy, so I won't complain if I do make it to 2072, but still). Both protection and expiration are important parts of the way copyright works in the U.S. It's a balancing act, to be sure, but I think the pendulum has swung too far in one direction. Maybe it's time we swing it back. Now if you'll excuse me, I'm going to participate in a different kind of blackout to protest SOPA.
1 - Thanks to James for the concise description. There are lots of much longer and better-sourced descriptions of the shortcomings of this bill and the issues surrounding it, so I won't belabor the point here.
Posted by Mark on January 18, 2012 at 06:20 PM .: link :.
Sunday, May 22, 2011
About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I'll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I'm busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.
In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.
Of course, this is a massive subject that can't even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it's worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we've already established, is bad for getting things done.
Let's say that you're working on something large and complex. You've managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being "in the zone"). Flow is basically a condition of deep concentration and immersion. When you're in this state, you feel energized and often don't even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda... flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you're doing, listen to the question and hopefully provide a helpful answer. This isn't necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.
Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn't happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you'll need to spend some time getting your brain back up to speed.
In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you're supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company and the nature of our business sometimes requires frequent interruptions, so there are times when I am in a near constant state of context switching. None of this is to say I'm not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.
In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they're immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn't nearly as bad as some workplaces that have a public address system - basically a way to interrupt hundreds or even thousands of people in order to reach one person - but it does still represent a challenge.
Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.
One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn't quite as effective as we'd like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:
The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.I don't think it's quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it's just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they've been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren't documented at all.
There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc...). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it's a nonesuch beast. I don't want to get too carried away talking about documentation, so I'll leave it at that (if you're still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it's obviously not the only way to minimize communication strain.
I've previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has been becoming more and more reliant on software. As such, it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant's own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.
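The email example above can be sketched as a toy message queue. This is just an illustration of the asynchronous principle, not any real email system: the `Inbox` class and its method names are hypothetical, and the point is simply that `send` never blocks or interrupts the recipient.

```python
import queue

class Inbox:
    """A toy asynchronous channel: senders never interrupt the receiver.

    Messages accumulate until the recipient chooses to check them,
    much like email (and unlike a tap on the shoulder).
    """
    def __init__(self):
        self._messages = queue.Queue()

    def send(self, msg):
        # The sender returns immediately; the receiver is never interrupted.
        self._messages.put(msg)

    def check(self):
        # The recipient drains the inbox at their own convenience,
        # handling everything that accumulated in one batch.
        batch = []
        while not self._messages.empty():
            batch.append(self._messages.get())
        return batch

inbox = Inbox()
inbox.send("Quick question about the release...")
inbox.send("Lunch?")
# ... the recipient stays in flow, then checks later:
print(inbox.check())  # both messages, handled together at a chosen time
```

Contrast this with synchronous communication, where the sender effectively calls a function on the receiver and waits for a return value, forcing a context switch right then and there.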
The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least, for younger people). The only major communication tools invented in the past few decades that wouldn't be asynchronous are instant messaging and chat clients. And even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it's really just an extension of conference calls.)
The benefit of asynchronous communication is, of course, that it doesn't (or at least it shouldn't) represent an interruption. If you're immersed in a particular task, you don't have to stop what you're doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.
Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth in such a method, there is no room for clarification, and each participant is often left with only their own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.
One of my favorite quotations is from Anne Morrow Lindbergh:
To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!It's difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.
I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.
You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don't work like that. Not to mention the fact that most of the communication efforts I'm talking about are the precursors to the writing of a computer program!
Despite all of this, a light formalization can be helpful, and the fact that teams must produce important documentation practically demands a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define the various systems, acronyms, and other jargon that are referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines for meaningful dialogue outside of the document. Of course, it wouldn't quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.
I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often follows its own set of grammatical patterns, which can be different from those of written language. Furthermore, face-to-face communication also consists of body language and other signs that can influence the meaning of what is said depending on the context in which it is spoken. This sort of nuance just isn't possible in written form.
This actually illustrates a wider problem. Again, I'm no linguist and haven't spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we're treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever changing concept of political-correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.
Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It's very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it's not as large a problem. But most organizations don't have such luxuries. Indeed, we're usually lucky if something is documented at all, let alone well organized and optimized.
The obvious question, which I've skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?
Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually between just two people. The Wright brothers, Gilbert and Sullivan, and so on.
So why have design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not later achievements? For instance, who designed the Saturn V rocket? No one knows that, because it was a large team of people (and it was the culmination of numerous predecessors made by other teams of people). Why is that?
The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that "Specialization is for insects" notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like fluid dynamics, and you'll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we'd assume. Someone tinkering in their garage couldn't make the Saturn V alone. They'd require too much expertise in a wide and disparate array of fields.
This isn't to say that someone tinkering in their garage can't create something wonderful. Indeed, that's where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind... but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)
And with more people comes more communication. It's a necessity. You cannot collaborate without large amounts of communication. In Tom DeMarco and Timothy Lister's book Peopleware, they call this the High-Tech Illusion:
...the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. ... The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.(Emphasis mine.) That insight is part of what initially inspired this series of posts. It's very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I'm getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I've been known to say, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I've blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I'm hoping the groundwork laid in these first two posts will mean that the next post won't be quite so long, but you never know!
Posted by Mark on May 22, 2011 at 07:51 PM .: link :.
Sunday, April 03, 2011
So the NY Times has an article debating the necessity of the various gadgets. The argument here is that we're seeing a lot of convergence in tech devices, and that many technologies that once warranted a dedicated device are now covered by something else. Let's take a look at their devices, what they said, and what I think:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)That sort of "surround and encapsulate" concept seems broadly applicable to a lot of technology, actually.
Posted by Mark on April 03, 2011 at 07:42 PM .: link :.
Wednesday, March 30, 2011
Nicholas Carr cracks me up. He's a skeptic of technology, and in particular, the internet. He's the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he does raise very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don't represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what's really going on.
Otakun recently pointed to an excerpt from Carr's latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:
...by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he's saying here. We often hear about how modern education is improved by focusing on things like "thinking skills" and "problem solving", but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes requires a base level of knowledge in order to be effective. This is something I've expounded on at length in a previous post, so I won't rehash that here.
The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn't come without its own set of challenges, and I'm sure Carr would be quick to point out that such a crash course would yield a false sense of security in us hapless internet users. After all, how do we know when we've reached that base level of competence? Our incompetence could very well be masking our ability to recognize our incompetence. However, I don't think that's an insurmountable problem. Most of us who use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is to determine just how much I don't know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.
Some other assorted thoughts about Carr's excerpt:
Posted by Mark on March 30, 2011 at 06:06 PM .: link :.
Wednesday, August 04, 2010
A/B Testing Spaghetti Sauce
Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I've explored on this blog, including Sunday's post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets:
The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I'll summarize in this paragraph in case you didn't watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we're seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download "free" music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of people's identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that's all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you're free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).
In relation to Sunday's post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it - create both versions of the image, segment visitors to your site, and track the results.
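The segment-and-track procedure above can be sketched in a few lines. This is a toy illustration, not any particular analytics product: the variant names, visitor IDs, and in-memory counters are all hypothetical stand-ins for a real experiment system.

```python
import hashlib
from collections import defaultdict

def assign_variant(visitor_id, variants=("closeup", "model_shot")):
    """Deterministically bucket a visitor into a variant.

    Hashing the visitor ID (rather than picking randomly per request)
    ensures a returning visitor always sees the same image.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# In-memory stand-ins for a real analytics store.
impressions = defaultdict(int)
conversions = defaultdict(int)

def record_visit(visitor_id, purchased):
    """Log one visit and (optionally) one conversion against the visitor's variant."""
    variant = assign_variant(visitor_id)
    impressions[variant] += 1
    if purchased:
        conversions[variant] += 1
    return variant

def conversion_rate(variant):
    """Fraction of impressions for this variant that converted."""
    if impressions[variant] == 0:
        return 0.0
    return conversions[variant] / impressions[variant]
```

In practice you'd also want a significance test before declaring a winner, since small samples can easily make one variant look better by chance, but the basic mechanics are just this: stable assignment, then counting.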
As discussed Sunday, there are a number of challenges with this approach, but one thing I didn't mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn't it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done - just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I'm not sure what the endgame looks like here. I suppose time will tell. For now, I'm just happy that Amazon's recommendations aren't completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
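The shift from one global winner to per-segment winners can be sketched as well. The segments, variants, and conversion numbers below are fabricated purely for illustration; the point is that slicing the same logged results by segment can reveal preferences that a single overall winner would paper over.

```python
from collections import defaultdict

# Hypothetical logged results: (segment, variant, converted)
results = [
    ("returning", "closeup", True), ("returning", "closeup", True),
    ("returning", "model_shot", False), ("returning", "model_shot", True),
    ("new", "closeup", False), ("new", "closeup", False),
    ("new", "model_shot", True), ("new", "model_shot", True),
]

def winners_by_segment(rows):
    """Return the best-converting variant within each customer segment."""
    # segment -> variant -> [conversions, impressions]
    tally = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for segment, variant, converted in rows:
        stats = tally[segment][variant]
        stats[0] += int(converted)
        stats[1] += 1
    return {segment: max(variants, key=lambda v: variants[v][0] / variants[v][1])
            for segment, variants in tally.items()}

print(winners_by_segment(results))
# With the fabricated data above, returning shoppers prefer the
# closeup while new shoppers prefer the model shot, so serving a
# single "winning" image would shortchange one group or the other.
```

This is exactly the Moskowitz move applied to site imagery: instead of asking "which image is best?", ask "which image is best for whom?"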
Posted by Mark on August 04, 2010 at 07:54 PM .: link :.
Sunday, July 04, 2010
Noted documentary filmmaker Errol Morris has been writing a series of posts about incompetence for the NY Times. The most interesting parts feature an interview with David Dunning, a psychologist whose experiments have discovered what's called the Dunning-Kruger Effect: our incompetence masks our ability to recognize our incompetence.
DAVID DUNNING: There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.I found this interesting in light of my recent posting about universally self-affirming outlooks (i.e. seeing the world the way we want to see it). In any case, the interview continues:
ERROL MORRIS: Knowing what you don’t know? Is this supposedly the hallmark of an intelligent person?It may be smart and modest, but that sort of thing usually gets politicians in trouble. Most people aren't politicians, though, so it's worth looking into this concept a little further. An interesting result of this effect is that many of the smartest people also tend to be somewhat modest (this isn't to say that they don't have an ego or that they can't act in arrogant ways, just that they tend to have a better idea of how much they don't know). Steve Schwartz has an essay called No One Knows What the F*** They’re Doing (or “The 3 Types of Knowledge”) that explores these ideas in some detail:
To really understand how it is that no one knows what they’re doing, we need to understand the three fundamental categories of information.Schwartz has a series of very helpful charts that illustrate this, but most people drastically overestimate the amount of knowledge in the "shit you know" category. In fact, that's the smallest category, and it is dwarfed by the "shit you know you don't know" category, which is, in itself, dwarfed by the "shit you don't know you don't know" category. The result is that most people who receive a lot of praise or recognition are surprised and feel a bit like frauds.
This is hardly a new concept, but it's always worth keeping in mind. When we learn something new, we've gained some knowledge. We've put some information into the "shit we know" category. But more importantly, we've probably also taken something out of the "shit we don't know that we don't know" category and put it into the "shit we know that we don't know" category. This matters because the unknown unknowns category is the most dangerous of the three, not least because our ignorance prevents us from even exploring it. As mentioned at the beginning of this post, our incompetence masks our ability to recognize our incompetence. In the interview, Morris references a short film he did once:
ERROL MORRIS: And I have an interview with the president of the Alcor Life Extension Foundation, a cryonics organization, on the 6 o’clock news in Riverside, California. One of the executives of the company had frozen his mother’s head for future resuscitation. (It’s called a “neuro,” as opposed to a “full-body” freezing.) The prosecutor claimed that they may not have waited for her to die. In answer to a reporter’s question, the president of the Alcor Life Extension Foundation said, “You know, we’re not stupid . . . ” And then corrected himself almost immediately, “We’re not that stupid that we would do something like that.”One might be tempted to call this a cynical outlook, but what it basically amounts to is that there's always something new to learn. Indeed, the more we learn, the more there is to learn. Now, if only we could invent the technology like what's presented in Diaspora (from my previous post), so we can live long enough to really learn a lot about the universe around us...
Posted by Mark on July 04, 2010 at 07:42 PM .: link :.
Wednesday, June 23, 2010
Internalizing the Ancient
Otaku Kun points to a wonderful entry in the Astronomy Picture of the Day series:
I think it’s impossible to really relate to things beyond human timescales. The idea of something being “ancient” has no meaning if it predates our human comprehension. The Neanderthals disappeared 30,000 years ago, which is probably really the farthest back we can reflect on. When we start talking about human forebears of 100,000 years ago and more, it becomes more abstract - that’s why it’s no coincidence that the Battlestar Galactica series finale set the events 150,000 years ago, well beyond even the reach of mythological narrative.I'm reminded of an essay by C. Northcote Parkinson, called High Finance or The Point of Vanishing Interest (the essay appears in Parkinson's Law, a collection of essays). Parkinson writes about how finance committees work:
People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is these that finance committees are mostly comprised.He then goes on to explore what he calls the "Law of Triviality". Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved. Thus he concludes, after a number of humorous but fitting examples, that there is a point of vanishing interest where the committee can no longer comment with authority. Astonishingly, the amount of time that is spent on $10 million and on $10 may well be the same. There is clearly a space of time which suffices equally for the largest and smallest sums.
In short, it's difficult to internalize numbers that high, whether we're talking about large sums of money or cosmic timescales. Indeed, I'd even say that Parkinson was being a bit optimistic. Millionaires and mathematicians may have a better grasp on the situation than most, but even they are probably at a loss when we start talking about cosmic timeframes. Otaku Kun also mentions Battlestar Galactica, which did end on an interesting note (even if that finale was quite disappointing as a whole) and which brings me to one of the reasons I really enjoy science fiction: the contemplation of concepts and ideas that are beyond comprehension. I can't really internalize the cosmic information encoded in the universe around me in such a way to do anything useful with it, but I can contemplate it and struggle to understand it, which is interesting and valuable in its own right. Perhaps someday, we will be able to devise ways to internalize and process information on a cosmic scale (this sort of optimistic statement perhaps represents another reason I enjoy SF).
Posted by Mark on June 23, 2010 at 08:30 PM .: link :.
Sunday, May 30, 2010
Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: "camera phones and iPods." This is what I wrote in response:
Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson "Walkman" branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can't be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I'll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I'm pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn't anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn't really say anything about what we now call "apps".
In terms of game consoles, I didn't really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii's new controller scheme wasn't shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a "high capacity DVD war" and spelled blu-ray wrong.
I'm not generally good at making predictions about this sort of thing, but it's nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I'm not really used to... so here are a few predictions for the rest of this year:
Posted by Mark on May 30, 2010 at 09:00 PM .: link :.
Sunday, March 14, 2010
Remix Culture and Soviet Montage Theory
A video mashup of The Beastie Boys' popular and amusing Sabotage video with scenes from Battlestar Galactica has been making the rounds recently. It's well done, but a little on the disposable side of remix culture. The video led Sonny Bunch to question "remix culture":
It’s quite good. But, ultimately, what’s the point?These are good questions, and I'm not surprised that the BSG Sabotage video prompted them. The implication of Sonny's post is that he thinks it is an unoriginal waste of talent (he may be playing a bit of devil's advocate here, but I'm willing to play along because these are interesting questions and because it will give me a chance to pedantically lecture about film history later in this post!) In the comments, Julian Sanchez makes a good point (based on a video he produced earlier that was referenced by someone else in the comment thread), which will be something I'll expand on later in this post:
First, the argument I’m making in that video is precisely that exclusive focus on the originality of the contribution misses the value in the activity itself. The vast majority of individual and collective cultural creation practiced by ordinary people is minimally “original” and unlikely to yield any final product of wide appeal or enduring value. I’m thinking of, e.g., people singing karaoke, playing in a garage band, drawing, building models, making silly YouTube videos, improvising freestyle poetry, whatever. What I’m positing is that there’s an intrinsic value to having a culture where people don’t simply get together to consume professionally produced songs and movies, but also routinely participate in cultural creation. And the value of that kind of cultural practice doesn’t depend on the stuff they create being particularly awe-inspiring.To which Sonny responds:
I’m actually entirely with you on the skill that it takes to produce a video like the Brooklyn hipsters did — I have no talent for lighting, camera movements, etc. I know how hard it is to edit together something like that, let alone shoot it in an aesthetically pleasing manner. That’s one of the reasons I find the final product so depressing, however: An impressive amount of skill and talent has gone into creating something that is not just unoriginal but, in a way, anti-original. These are people who are so devoid of originality that they define themselves not only by copying a video that they’ve seen before but by copying the very personalities of characters that they’ve seen before.Another good point, but I think Sonny is missing something here. The talents of the BSG Sabotage editor or the Brooklyn hipsters are certainly admirable, but while we can speculate, we don't necessarily know their motivations. About 10 years ago, a friend and amateur filmmaker showed me a video one of his friends had produced. It took scenes from Star Wars and Star Trek II: The Wrath of Khan and recut them so it looked like the Millennium Falcon was fighting the Enterprise. It would show Han Solo shooting, then cut to the Enterprise being hit. Shatner would exclaim "Fire!" and then it would cut to a blast hitting the Millennium Falcon. And so on. Another video from the same guy took the musical number George Lucas had added to Return of the Jedi in the Special Edition, laid Wu-Tang Clan in as the soundtrack, then re-edited the video elements so everything matched up.
These videos sound fun, but not particularly original or even special in this day and age. However, these videos were made ten to fifteen years ago. I was watching them on a VHS(!) and the person making the edits was using analog techniques and equipment. It turns out that these videos were how he honed his craft before he officially got a job as an editor in Hollywood. I'm sure there were tons of other videos, probably much less impressive, that he had created before the ones I'm referencing. Now, I'm not saying that the BSG Sabotage editor or the Brooklyn Hipsters are angling for professional filmmaking jobs, but it's quite possible that they are at least exploring their own possibilities. I would also bet that these people have been making videos like this (though probably much less sophisticated) since they were kids. The only big difference now is that technology has enabled them to make a slicker experience and, more importantly, to distribute it widely.
It's also worth noting that this sort of thing is not without historical precedent. Indeed, the history of editing and montage is filled with this sort of thing. In the 1910s and 1920s, Russian filmmaker Lev Kuleshov conducted a series of famous experiments that helped demonstrate the role of editing in films. In these experiments, he would show a man with an expressionless face, then cut to various other shots. In one example, he showed the expressionless face, then cut to a bowl of soup. When prompted, audiences would report that the man was hungry. Kuleshov then took the exact same footage of the expressionless face and cut to a pretty girl. This time, audiences reported that the man was in love. Another experiment alternated between the expressionless face and a coffin, a juxtaposition that led audiences to believe the man was stricken with grief. This phenomenon has become known as the Kuleshov Effect.
For the current discussion, one notable aspect of these experiments is that Kuleshov was working entirely from pre-existing material. And this sort of thing was not uncommon, either. At the time, there was a shortage of raw film stock in Russia. Filmmakers had to make do with what they had, and often spent their time re-cutting existing material, which led to what's now called Soviet Montage Theory. When D.W. Griffith's Intolerance, which used advanced editing techniques (it featured a series of cross-cut narratives which eventually converged in the last reel), opened in Russia in 1919, it quickly became very popular. The Russian film community saw this as a validation and popularization of their theories and also as an opportunity. Russian critics and filmmakers were impressed by the film's technical qualities, but dismissed the story as "bourgeois", claiming that it failed to resolve issues of class conflict, and so on. So, not having much raw film stock of their own, they took to playing with Griffith's film, re-editing certain sections to make it more "agitational" and revolutionary.
The extent to which this happened is a bit unclear, and certainly public exhibitions were not as dramatically altered as I'm making it out to be. However, there are Soviet versions of the movie that contained small edits and a newly filmed prologue. This was done to "sharpen the class conflict" and "anti-exploitation" aspects of the film, while still attempting to respect the author's original intentions. This was part of a larger trend of adding Soviet propaganda to pre-existing works of art, and given the ideals of socialism, it makes sense. (The preceding is a simplification of history, of course... see this chapter from Inside the Film Factory for a more detailed discussion of Intolerance and its impact on Russian cinema.) In the Russian film world, things really began to take off with Sergei Eisenstein and films like Battleship Potemkin. Watch that film today, and you'll be struck by how modern-feeling the editing is, especially during the infamous Odessa Steps sequence (which you'll also recognize if you've ever seen Brian De Palma's "homage" in The Untouchables).
Now, I'm not really suggesting that the woman who produced BSG Sabotage is going to be the next Eisenstein, merely that the act of cutting together pre-existing footage is not necessarily a sad waste of talent. I've drastically simplified the history of Soviet Montage Theory above, but there are parallels between Soviet filmmakers then and YouTube videomakers today. Due to limited resources and knowledge, they began experimenting with pre-existing footage. They learned from the experience and went on to grander modifications of larger works of art (Griffith's Intolerance). This eventually culminated in original works of art, like those produced by Eisenstein.
Now, YouTube videomakers haven't quite made that expressive leap yet, but it's only been a few years. It's going to take time, and obviously editing and montage are already well established features of film, so innovation won't necessarily come from that direction. But that doesn't mean that nothing of value can emerge from this sort of thing, nor does messing around with videos on YouTube limit these young artists to film. While Roger Ebert's criticisms are valid, more and more I'm seeing interactivity as the unexplored territory of art. Video games like Heavy Rain are an interesting experience and hint at something along these lines, but they are still severely limited in many ways (in other words, Ebert is probably right when it comes to that game). It will take a lot of experimentation to get to a point where maybe Ebert would be wrong (if it's even possible at all). Learning about the visual medium of film by editing together videos of pre-existing material would be an essential step in the process. Improving the technology with which to do so is also an important step. And so on.
To return back to the BSG Sabotage video for a moment, I think that it's worth noting the origins of that video. The video is clearly having fun by juxtaposing different genres and mediums (it is by no means the best or even a great example of this sort of thing, but it's still there. For a better example of something built entirely from pre-existing works, see Shining.). Battlestar Galactica was a popular science fiction series, beloved by many, and this video comments on the series slightly by setting the whole thing to an unconventional music choice (though given the recent Star Trek reboot's use of the same song, I have to wonder what the deal is with SF and Sabotage). Ironically, even the "original" Beastie Boys video was nothing more than a pastiche of 70s cop television shows. While I'm no expert, the music on Ill Communication, in general, has a very 70s feel to it. I suppose you could say that association only exists because of the Sabotage video itself, but even other songs on that album have that feel - for one example, take Sabrosa. Indeed, the Beastie Boys are themselves known for this sort of appropriation of pre-existing work. Their album Paul's Boutique infamously contains literally hundreds of samples and remixes of popular music. I'm not sure how they got away with some of that stuff, but I suppose this happened before getting sued for sampling was common. Nowadays, in order to get away with something like Paul's Boutique, you'll need to have deep pockets, which sorta defeats the purpose of using a sample in the first place. After all, samples are used in the absence of resources, not just because of a lack of originality (though I guess that's part of it). In 2004 Nate Harrison put together this exceptional video explaining how a 6 second drum beat (known as the Amen Break) exploded into its own sub-culture:
There is certainly some repetition here, and maybe some lack of originality, but I don't find this sort of thing "sad". To be honest, I've never been a big fan of hip hop music, but I can't deny the impact it's had on our culture and all of our music. As I write this post, I'm listening to Danger Mouse's The Grey Album:
It uses an a cappella version of rapper Jay-Z's The Black Album and couples it with instrumentals created from a multitude of unauthorized samples from The Beatles' LP The Beatles (more commonly known as The White Album). The Grey Album gained notoriety due to the response by EMI in attempting to halt its distribution.I'm not familiar with Jay-Z's album and I'm probably less familiar with The White Album than I should be, but I have to admit that this combination and the artistry with which the two seemingly incompatible works are combined into one cohesive whole is impressive. Despite the lack of an official release (that would have made Danger Mouse money), The Grey Album made many best of the year (and best of the decade) lists. I see some parallels between the 1980s and 1990s use of samples, remixes, and mashups, and what was happening in Russian film in the 1910s and 1920s. There is a pattern worth noticing here: New technology enables artists to play with existing art, then apply their learnings to something more original later. Again, I don't think that the BSG Sabotage video is particularly groundbreaking, but that doesn't mean that the entire remix culture is worthless. I'm willing to bet that remix culture will eventually contribute towards something much more original than BSG Sabotage...
Incidentally, the director of the original Beastie Boys Sabotage video? Spike Jonze, who would go on to direct movies like Being John Malkovich, Adaptation., and Where the Wild Things Are. I think we'll see some parallels between the oft-maligned music video directors, who started to emerge in the film world in the 1990s, and YouTube videomakers. At some point in the near future, we're going to see film directors coming from the world of short-form internet videos. Will this be a good thing? I'm sure there are lots of people who hate the music video aesthetic in film, but it's hard to really be that upset that people like David Fincher and Spike Jonze are making movies these days. I doubt YouTubers will have a more popular style, and I don't think they'll be dominant or anything, but I think they will arrive. Or maybe YouTube videomakers will branch out into some other medium or create something entirely new (as I mentioned earlier, there's a lot of room for innovation in the interactive realm). In all honesty, I don't really know where remix culture is going, but maybe that's why I like it. I'm looking forward to seeing where it leads.
Posted by Mark on March 14, 2010 at 02:18 PM .: link :.
Sunday, June 28, 2009
Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how it works, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it all still feels fast. The processor performs these operations in a serial fashion - basically a single-file line of operations.
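To get a feel for how much machinery hides behind even "simple" addition, here's a sketch in Python of roughly how an adder circuit does it, using nothing but bitwise operations (a software caricature of the hardware, not a literal circuit description):

```python
def add(a, b):
    """Add two non-negative integers using only bit operations,
    mimicking a ripple-carry adder: XOR produces the sum bits
    without carries, AND plus a shift produces the carries, and
    the loop repeats until no carries remain."""
    while b:
        carry = (a & b) << 1  # columns where both bits are 1 carry into the next column
        a = a ^ b             # per-column sum, ignoring carries
        b = carry
    return a
```

Each pass through that loop is itself several more primitive operations, all executed one after another in that single-file line.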
This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. When a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called context switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load that state from memory. Fortunately, this overhead is usually offset by the efficiency gained through frequent context switches.
If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a signal called an interrupt. For the most part, the computers we're all using are interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
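The save-and-restore bookkeeping can be sketched with Python generators, each of which conveniently preserves its own state (local variables and position) between runs. This is cooperative multitasking rather than true interrupt-driven scheduling, but the context-switching pattern is analogous:

```python
from collections import deque

def task(name, steps):
    """A toy task: each yield is a point where the 'CPU' may be
    taken away and handed to another task.  The generator object
    itself holds the saved state between runs."""
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(tasks):
    """Minimal scheduler: run each task for one step, then context
    switch to the next, until every task has finished."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))   # restore saved state, run one step
            queue.append(current)         # context switch: back of the line
        except StopIteration:
            pass                          # task finished; drop it
    return trace

trace = round_robin([task("A", 2), task("B", 2)])
```

Run it, and the two tasks interleave step by step even though only one is ever executing at a time - which is exactly the illusion context switching creates.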
This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to another and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms or a number of other physical constraints - so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processors (most commonly with 2 or 4 cores).
Parallel computing can do many things that are far beyond our current technological capabilities; for a perfect example, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason for that is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's, but elephants are obviously much less intelligent than humans.
Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size or neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
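The weighted-connection idea is easy to caricature in code. This toy neuron is a gross simplification (real neurons pulse and adapt rather than computing a one-shot weighted sum, and the numbers below are invented), but it shows how connection weights let some inputs influence the output far more than others:

```python
def neuron(inputs, weights, threshold=0.0):
    """Crude artificial neuron: sum each input multiplied by its
    connection weight, and 'fire' (return 1) only if the total
    activation clears the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# A strongly weighted excitatory connection (0.9) overrides a weak
# inhibitory one (-0.2), so this neuron fires.
fired = neuron([1, 1, 0], weights=[0.9, -0.2, 0.5])
```

A real brain, of course, is constantly re-tuning those weights, which is exactly the "in flux" quality described above.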
This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.
However, this all comes with its own set of tradeoffs, the most relevant of which (for this post) is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).
In a computer, everything happens serially, and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system; they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in what you were originally doing, because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you need only ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.
One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.
From my own experience during a particularly demanding project a couple of years ago, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is there (and complain when people are there that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.
A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to do at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a large number of meetings on our calendars, which only makes it more difficult to concentrate on something important.
Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.
Another example: if it's 2:40 pm and I know I have a meeting at 3 pm - should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.
Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).
(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.
Wednesday, January 07, 2009
For obvious reasons, time is a little short these days, so here are a few links I've found interesting lately:
Posted by Mark on January 07, 2009 at 08:56 PM .: link :.
Sunday, November 02, 2008
At ZDNet, Robin Harris makes a mildly persuasive argument that Blu-Ray is dying and will end up becoming a videophile niche format like laserdisc. When Toshiba threw in the towel and gave up on HD-DVD about 8 months ago, it looked like a major victory for Sony on multiple fronts. First, they were the uncontested heir to the HD movie market and second, fence sitters in the next-gen gaming console market had a reason to plunk down a little extra for a PS3. But 8 months later, things haven't changed a whole lot. Standalone BR players have come down in price and will be reaching affordable levels shortly. PS3 sales received a bump, overtaking the XBox sporadically during this year, but it looks like Microsoft's price cut has reestablished the PS3 as the loser of the next-gen gaming market (of course, both are being clobbered by Nintendo). Sony is betting on the release of several highly anticipated games for the PS3 this holiday season, which should sell consoles and thus increase BR market penetration.
There are lots of things to consider here:
Posted by Mark on November 02, 2008 at 01:02 PM .: link :.
Wednesday, September 24, 2008
A few years ago, The Onion put out a book called Our Dumb Century. It was composed of a series of newspaper front pages, one from each year. It was an interesting book, in part because of the events they chose to represent each year and also because The Onion writers are hilarious. The most brilliant entry in the book was from the 1969 edition of the paper:
Utterly brilliant. You can't read it on that small copy, but there's a whole profanity-laden exchange between Houston and Tranquility Base that's also hysterically funny. As it turns out, The Onion folks went ahead and made a video, complete with archival footage and authentic sounding voices, beeps, static, etc... Incredibly funny. [video via Need Coffee]
Update: Weird, I tried to embed the video in this post, but when you click play it says it's no longer available... but if you go directly to youtube, you can get the video. I'm taking out the embedded video and putting in the link for now.
Posted by Mark on September 24, 2008 at 10:04 PM .: link :.
Sunday, May 11, 2008
Link Dump: Space!
Time is short, so just a few space themed links for you:
Posted by Mark on May 11, 2008 at 09:57 PM .: link :.
Sunday, April 27, 2008
The recent bout with my TV on DVD addiction necessitated an increase in Netflix usage, which made me curious. How well have I really taken advantage of the Netflix service, and is it worth the monthly expense?
If I were to rent a movie at a local video store like Blockbuster, each rental would cost somewhere around $4 (this is an extremely charitable estimate, as I'm sure it's probably closer to $5 at this point), plus the expense in time and effort (I mean, come on, I'd have to drive about a mile out of my way to go to one of these places!) Netflix costs me $15.99 a month for the 3-disc-at-a-time plan (this plan was $17.99 when I signed up, but decreased in price twice over my roughly two years of membership), so it takes about 4-5 Netflix rentals a month to recoup my costs and bring the price of an average rental down below $4. I've been a member for one year and ten months... how did I do (click for a larger version)?
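The break-even arithmetic behind that claim is simple enough to sketch out (the numbers are the ones quoted above; the $4 store rental is, again, a charitable estimate):

```python
import math

MONTHLY_FEE = 15.99    # 3-disc-at-a-time plan, per month
STORE_RENTAL = 4.00    # charitable estimate for a single store rental

# Rentals per month needed before Netflix matches the store price
break_even = math.ceil(MONTHLY_FEE / STORE_RENTAL)
print(break_even)  # 4

# Effective cost per rental at a few monthly volumes
for rentals in (4, 9, 12):
    print(rentals, round(MONTHLY_FEE / rentals, 2))
```

At 4 rentals a month you're already just under the $4 mark, and at 9 a month (my eventual average), each disc works out to well under $2.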
A few notes on the data:
This has been an interesting exercise, because I feel like I'm a little more consistent than the data actually shows. I'm really surprised that there are several months where my rentals went down to 6... I could have sworn I watched at least 2-3 discs a week, with the occasional exception. Still, an average of 9 movies a month is nothing to sneeze at, I guess. I've heard horror stories where Netflix will start throttling you and take longer to deliver discs if you go above a certain amount of rentals per month (at a certain point, the cost of processing your rentals becomes more than you're paying, which I guess is what prompts Netflix to start throttling you), but I haven't had a problem yet. If I keep up my recent viewing habits though, this could change...
Posted by Mark on April 27, 2008 at 11:09 PM .: link :.
Wednesday, December 05, 2007
Every so often, I see someone who is genuinely concerned with reaching the unreachable. Whether it be scientists who argue about how to frame their arguments, alpha-geek programmers who try to figure out how to reach typical, average programmers, or critics who try to open a dialogue with feminists. Debates tend to polarize, and when it comes to politics or religion, assumptions of bad faith on both sides tend to derail discussions pretty quickly.
How do you reach the unreachable? Naturally, the topic is much larger than a single blog entry, but I did run across an interesting post by Jon Udell that outlines Charles Darwin's rhetorical strategy in the book, On the Origin of Species (which popularized the theory of evolution).
Darwin, says Slatkin, was like a salesman who finds lots of little ways to get you to say yes before you're asked to utter the big yes. In this case, Darwin invited people to affirm things they already knew, about a topic much more familiar in their era than in ours: domestic species. Did people observe variation in domestic species? Yes. And as Darwin piles on the examples, the reader says, yes, yes, OK, I get it, of course I see that some pigeons have longer tail feathers. Did people observe inheritance? Yes. And again, as he piles on the examples, the reader says yes, yes, OK, I get it, everyone knows that the offspring of longer-tail-feather pigeons have longer tail feathers.I think Udell simplifies the inception and development of the idea of evolution, but I think the point generally holds. Darwin's ideas didn't come into mainstream prominence until he published his book, decades after he had begun his work. Obviously, Darwin's strategy isn't applicable in every situation, but it is an interesting place to start (I suppose we should keep in mind that evolution is still controversial amongst the mainstream)...
Posted by Mark on December 05, 2007 at 08:29 PM .: link :.
Wednesday, November 28, 2007
Facial Expressions and the Closed Eye Syndrome
I've been reading Malcolm Gladwell's book, Blink, and one of the chapters focuses on the psychology of facial expressions. Put simply, we wear our emotions on our face, and some enterprising psychologists took to mapping the distinct muscular movements that the human face can make. It's an interesting process, and it turns out that people who learn these facial expressions (of which there are many) are eerily good at recognizing what people are really thinking, even if they aren't trying to show it. It's almost like mind reading, and we all do it to some extent or another (mostly, we do it unconsciously). Body language and facial expressions are packed with information, and we'd all be pretty much lost without that kind of feedback (perhaps why misunderstandings are more common on the phone or in email). Most of the time, our expressions are voluntary, but sometimes they're not. Even if we're trying to suppress our expressions, a fleeting look may cross our faces. Often, these "micro-expressions" last only a few milliseconds and are imperceptible, but when trained psychologists watch video of, say, Harold "Kim" Philby (a notorious Soviet spy) giving a press conference, they're able to read him like a book (slow motion helps).
I found this example interesting, and it highlights some of the subtle differences that can exist between expressions (in this case, between a voluntary and involuntary expression):
If I were to ask you to smile, you would flex your zygomatic major. By contrast, if you were to smile spontaneously, in the presence of genuine emotion, you would not only flex your zygomatic but also tighten the orbicularis oculi, pars orbitalis, which is the muscle that encircles the eye. It is almost impossible to tighten the orbicularis oculi, pars orbitalis on demand, and it is equally difficult to stop it from tightening when we smile at something genuinely pleasurable.I found that interesting in light of the Closed Eye Syndrome I noticed in Anime. I wonder how that affects the way we perceive Anime. If a smiling mouth by itself means a fake expression of happiness while a smiling mouth and closed eyes means genuine emotion, does that make the animation more authentic? Animation obviously doesn't have the fidelity of video or film, but we can obviously read expressions from animated faces, so I would expect that closed eye syndrome exists more for the sake of accurately conveying genuine emotion than anything else. In my original post on the subject, Roy noted that the reason I noticed closed eyes in anime could have something to do with the way Japan and the US read emotion. He pointed to an article that claimed Americans focus more on the mouth while the Japanese focus more on the eyes when trying to read emotions from facial expressions. One example from the article was emoticons. For happiness, Americans use a smiley face :) while the Japanese tend to use ^_^ (which seems to be a face with eyes closed). That might still be part of it, but ever since I made the observation, I've noticed similar expressions in American animation (I just recently noticed it a lot in a Venture Bros. episode). Still, occurrences in American animation seem less frequent (or perhaps less obvious), so perhaps the observation still holds.
Gladwell's book is interesting, as expected, though I'm not sure yet if he has a point other than to observe that we do a lot of subconscious analysis and make lots of split decisions, and sometimes this is good (other times it's not). Still, he's good at finding examples and drilling down into the issue, and even if I'm not sure about his conclusions, it's always fun to read. There's lots more on this subject in the book (for instance, he goes over how facial expressions and our emotions are a two-way phenomenon - meaning that if you intentionally contort your face in a specific way, you can induce certain emotions. The psychologists I mentioned earlier who were mapping expressions noticed that after a full day of trying to manipulate their facial muscles to show anger (even though they weren't angry) they felt horrible. Some tests have been done to confirm that, indeed, our facial expressions are linked directly to our brain) and it's probably worth a read if that's your bag.
Posted by Mark on November 28, 2007 at 08:19 PM .: link :.
Sunday, November 25, 2007
Requiem for a Meme
In July of this year, I attempted to start a Movie Screenshot Meme. The idea was simple and (I thought) neat. I would post a screenshot, and visitors would guess what movie it was from. The person who guessed correctly would continue the game by either posting the next round on their blog, or if they didn't have a blog, they could send me a screenshot or just ask me to post another round. Things went reasonably well at first, and the game experienced some modest success. However, the game eventually morphed into the Mark, Alex, and Roy show, as the rounds kept cycling through each of our blogs. The last round was posted in September and despite a winning entry, the game has not continued.
The challenge of starting this meme was apparent from the start, but there were some other things that hindered the game a bit. Here are some assorted thoughts about the game, what held it back, and what could be done to improve the chances of adoption.
(click image for a larger version) I'd say this is difficult except that it's blatantly obvious who that is in the screenshot. It shouldn't be that hard to pick out the movie even if you haven't seen it. What the heck, the winner of this round can pick 5 blogs they'd like to see post a screenshot and post a screenshot on their blog if they desire. As I mentioned above, I'm hesitant to annoy people with this sort of thing, but hey, why not? Let's give this meme some legs.
Posted by Mark on November 25, 2007 at 03:04 PM .: link :.
Sunday, November 18, 2007
The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.
This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.
Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.Settling for something that is good enough to meet your needs is quite different than just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"
The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented is important is interesting. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans dammit.
Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think retailers that offer recommendations based on what other customers have purchased are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?
I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source-Deep Throat-who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations-that is, summaries of the deals put together for interested parties-and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. 
Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).
As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.
Wednesday, October 17, 2007
The Spinning Silhouette
This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it's being touted as a "right brain vs left brain test." The theory goes that if you see the silhouette spinning clockwise, you're right brained, and you're left brained if you see it spinning counterclockwise.
Every time I looked at the damn thing, it was spinning a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn't seem to rotate all the way around at all; instead it seems like she's moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen - and the leg always seems to be in front of the silhouette).
Of course, this is the essence of the illusion. The silhouette isn't actually spinning at all, because it's two dimensional. However, since my brain is used to living in a three dimensional world (and thus parsing three dimensional images), it's assuming that the image is also three dimensional. We're actually making lots of assumptions about the image, and that's why we can see it going one way or the other.
Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I'm pretty sure this thing is either broken or it's cheating. Well, I shouldn't say that. I noticed something off on one of the frames, and I'd be real curious to know how that affects people's perception of the illusion (to me, it means the image is definitely moving counterclockwise). I'm almost positive that it's too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold. First things first, here's the actual spinning silhouette.
Again, some of you will see it spinning in one direction, some in the other direction. Everyone seems to have a different trick for getting it to switch direction. Some say to focus on the shadow, some say to look at the ankles. Closing my eyes and reopening them seems to do the trick for me. Now let's take a closer look at one of the frames. Here's frame 12:
Looking at this frame, you should be able to switch back and forth, seeing the leg behind the person or in front of the person. Again, because it's a silhouette and a two dimensional image, our brain usually makes an assumption of depth, putting the leg in front or behind the body. Switching back and forth on this static image was actually a lot easier for me. Now the tricky part comes in the next frame, number 13 (obviously, the arrow was added by me):
Now, if you look closely at the leg, you'll see a little imperfection in the silhouette. Maybe I'm wrong, but that little gash in the leg seems to imply that the leg is behind the body. If you try, you can still get yourself to see the image as having the leg in front, but then you've got this gash in the leg that just seems very out of place.
So what to make of this? First, the imperfection is subtle enough (it's on 1 frame out of 34) that everyone still seems to be able to see it rotate in both directions. Second, maybe I'm crazy, and the little gash doesn't imply what I think. Anyone have alternative explanations? Third, is that imperfection intentional? If so, why? It does not seem necessary, so I'd be curious to know if the creators knew about it, and what their intention was regarding it.
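For anyone who wants to poke at the frames themselves without firing up the GIMP, here's a quick sketch of doing the same inspection in Python using the Pillow imaging library (the filename and function name here are just placeholders for wherever you saved the gif):

```python
# Inspect an animated GIF frame by frame: save each frame out as a PNG
# (so you can eyeball things like the gash on frame 13) and collect
# each frame's display delay in milliseconds.
from PIL import Image, ImageSequence

def dump_frames(path, out_prefix="frame"):
    """Save every frame of the GIF at `path` and return (index, delay_ms) pairs."""
    frames = []
    with Image.open(path) as im:
        for i, frame in enumerate(ImageSequence.Iterator(im), start=1):
            # GIF frames are usually palette-mode; convert before saving as PNG
            frame.convert("RGBA").save(f"{out_prefix}_{i:02d}.png")
            frames.append((i, frame.info.get("duration", 0)))
    return frames

# Usage (assuming you saved the illusion locally):
# for index, delay in dump_frames("spinning_silhouette.gif"):
#     print(f"frame {index}: {delay} ms")
```

The per-frame delay is worth a look too, since browsers are known to interpret very small GIF frame delays differently, which might explain why the image appears to spin at different speeds in different browsers.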
Finally, as far as the left brain versus right brain portion, I find that I don't really care, but I am interested in how the imperfection would affect this "test." This neuroscientist seems to be pretty adamant about the whole left/right thing being hogwash though:
...the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).
At the very least, the traditional left/right brain theory is a wildly oversimplified version of what's really happening. The post also goes into the way the brain "fills in the gaps" for confusing visual information, thus allowing the illusion.
Update: Strange - the image appears to be rotating MUCH faster in Firefox than in Opera or IE. I wonder how that affects perception.
Posted by Mark on October 17, 2007 at 10:42 PM .: link :.
Sunday, June 03, 2007
The Long Tail of Forgotten Works
I'm currently reading Chris Anderson's book The Long Tail, and he relates a story about how some books find an audience long after they've been published.
In 1988, a British mountain climber named Joe Simpson wrote a book called Touching the Void, a harrowing account of near death in the Peruvian Andes. Though reviews for the book were good, it was only a modest success, and soon was largely forgotten. Then, a decade later, a strange thing happened. Jon Krakauer wrote Into Thin Air, another book about a mountain-climbing tragedy, which became a publishing sensation. Suddenly Touching the Void started to sell again.
There is something interesting going on here. I'm wondering how many great works of art are simply lost in obscurity. These days, we've got the internet and primitive tools to traverse the long tail, so it seems that a lot of obscure works find a new audience when a new, similar work is released. But what happened before the internet? How many works have simply gone out of print because they never found an audience - how many works suffered the fate Touching the Void narrowly avoided?
Of course, I have no idea (that's kinda the point), but one of the great things about the internet and the emerging infinite shelf space of online retailers is that some of these obscure works are rediscovered and new connections are made. For instance, I once came across a blog post by Jonathon Delacour about this obscure Japanese horror film called Matango: Attack of the Mushroom People. The description of the film?
After a yacht is damaged in a storm and stranded on a deserted island, the passengers: a psychologist, his girlfriend, a wealthy businessman, a famous singer, a writer, a sailor and his skipper take refuge in a fungus covered boat. While using the mushrooms for sustenance, they find the ship's journal describing the mushrooms to be poisonous, however some members of the shipwrecked party continue to ingest the mysterious fungi transforming them into hideous fungal monsters.
Sound familiar? As Delacour notes, a reviewer on Amazon.com sure thinks so:
Was this the Inspiration for Gilligan's Island? ...and that's a serious question. It predated the premier of Gillian's Island by several years. There's a millionaire who owns a yacht that looks like the Minnow. On board is a professor, the captain, a goofy (though somewhat sinster in the film) first mate, a pretty but shy country girl named Okiko, and a singer/movie star. There are seven castaways in all. "Lovey" is replaced by another male character, a writer named Roy. The boat crashes into an island where they are castaways... Course on Gilligan's Island they didn't all turn into mutated mushrooms monsters. Rent or buy the DVD (one of my favorite films in Japanese cinema, finally getting its due...) and you tell me if Gilligan's Island isn't a complete rip-off of this film.
Several reviewers actually make the Gilligan's Island connection, and one even takes time to refute the claim that Gilligan ripped off Matango:
Actually as stated on this DVD's actor commentary Matango premiered in Japanese theaters in and around mid 1963. The Gilligan's Island first pilot (with different actors as The Professor and Ginger) was made in late 1963 thus the Japanese film does not predate Gilligan by a few years as another poster here thinks. Schwartz could have heard about a Japanese film made with seven castaways (as Hollywood and Tokoyo's Toho were in communication). But he definitely didn't see the Japanese film before he pitched gI to the networks in early 63.
So perhaps this was just a happy coincidence... A commenter on Delacour's post mentions that the movie is loosely based on a 1907 short story by William Hope Hodgson called The Voice in the Night, but while it certainly was the inspiration behind Matango, it probably didn't inspire Gilligan's Island...
I seem to have veered off track here, but it was an interesting diversion: from obscure Japanese horror film to Gilligan's Island to William Hope Hodgson... would anyone have made these connections 20 years ago? It certainly would have been possible, but I doubt it would happen as quickly or efficiently as it did on the internet.
Posted by Mark on June 03, 2007 at 08:35 PM .: link :.
Sunday, April 29, 2007
Again Cell Phones
About 2 years ago, I started looking around for a new cell phone. At the time, I just wanted a simple, no-frills type phone, but I kept an open mind and looked at some of the more advanced features that were becoming available. I eventually settled on a small, low-end Nokia. I instantly regretted the decision not to get a camera phone, but otherwise, the phone has performed admirably. The only other complaint I really have is that the call volume could stand to be a little louder. In any case, in the comments of one of the above linked posts, I mentioned:
I'm actually kinda surprised that cell phones aren't... better than they are now. I figure in about 2 years, my dream phone will be more attainable, so for now, I'll make do with what I got.
Well, it's been 2 years, I'm once again looking into purchasing a new phone and... I'm still surprised that cell phones aren't better than they are right now. Seriously, what the heck is going on? My priorities aren't that unusual and have only changed a little since my last foray: I want a phone that has strong battery life, good call quality (with louder call volume), good usability (i.e. button placement, menu structure, etc...), and a quality camera (at least 1.3 megapixel). There are lots of secondary features and nice-to-haves, but those are the most important things. This is apparently difficult to achieve though, and I'm distinctly underwhelmed by my options. Actually there are a lot of decent phones out there, but I think I've fallen into the classic paradox of choice trap. Here are some phones I'm considering:
Update: Drool. Battery life looks lame, but otherwise it's great. Not that it matters, as it ain't available yet.
Wednesday, March 07, 2007
A System of Warnings
Josh Porter recently wrote about some design principles he uses. As Josh notes, people often confuse design with art. Art is a form of personal expression, while design is about use.
The designer needs someone to use (not only appreciate) what they create. Design doesn't serve its purpose without people to use it. Design helps solve human problems. The highest accolade we can bestow on a design is not that it is beautiful, as we do in Art, but that it is well-used.
I think one of the most recognized and perhaps important designs of the past twenty years or so is the Nutrition Facts label. Instantly recognizable and packed with information, yet concise and easy to read and use. It's not glamorous, but it works so well that we barely even notice it. It's great design.
While nutrition is certainly an important subject worthy of a thoughtful design, I recently stumbled upon a design project that is intriguing, difficult and important. In the desert of Southeastern New Mexico lies the Waste Isolation Pilot Plant (WIPP), an underground radioactive waste repository. Not a pleasant place. During the planning stages of the facility, a panel of experts was tasked with designing a 10,000-year marking system. It's an intriguing design problem. The resulting report is an astounding, powerful and oddly poignant document (excerpts here, huge .pdf version of the full report here). They developed an interesting system here; note, they didn't just create signs, the entire site (from the physical layout to the words and imagery used) was designed to communicate a message across multiple levels, with a high level of redundancy. It's not just a warning, it's a system of interconnected and reinforced warnings. The authors also attempted to anticipate a variety of potential attacks. What is the message they wanted to convey? Here's a brief summary:
Wednesday, February 21, 2007
Various links for your enjoyment:
Posted by Mark on February 21, 2007 at 08:16 PM .: link :.
Wednesday, February 14, 2007
Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.
I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited times" is the key. In England, for example, an author does not merely hold a copyright on their work, they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.
The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction: some systems emphasize inalienable individual rights, while others, like the U.S., grant more limited, transferable protections. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.
The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to copy and distribute on a very large scale.
The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases like when the RIAA sued a 79 year old guy who doesn't even own a computer or know how to operate one).
Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.
The concept of borrowing a Book, CD or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).
There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.
Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).
To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.
Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.
DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how DMCA can affect the situation is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of their malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.
A few months ago, my Windows computer died and I decided to give linux a try. I wanted to see if I could get linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that format on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.
Which brings me to my next point, which is that DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes music store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the general person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.
Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the emusic service sells high quality, variable bit rate MP3 files without DRM, and it has established emusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.
Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.
For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.
The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).
This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware. And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.
My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited off of giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix your content, but I don't think it has been adopted to the extent that it should be.
To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...
Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.
Wednesday, January 10, 2007
A couple of years ago, I was in the market for a new phone. After looking around at all the options and features, I ended up settling on a relatively "low-end" phone that was good for calls and SMS and that's about it. It was small, simple, and to the point, and while it has served me well, I have kinda regretted not getting a camera in the phone (this is the paradox of choice in action). I considered the camera phone, as well as phones that played music (three birds with one stone!), but it struck me that feature packed devices like that simply weren't ready yet. They were expensive, clunky, and the interface looked awful.
Enter Apple's new iPhone. Put simply, they've done a phenomenal job with this phone. I'm impressed. Watch the keynote presentation here. Some highlights that I found interesting:
Updates: Brian Tiemann has further thoughts. Kevin Murphy has some thoughts as well. Ars Technica also notes some issues with the iPhone, and has some other good commentary (actually, just read their Infinite Loop journal). I think the biggest issue I forgot to mention is that the iPhone is exclusive to Cingular (and you have to get a 2 year plan at that).
Sunday, November 19, 2006
Time is short this week, so a few quick links:
Update: This Lists of Bests website is neat. It remembers what movies you've seen, and applies them to other lists. For example, without even going through the AFI top 100, I know that I've seen at least 41% of the list (because of all the stuff I noted when going through the top 1000). You can also compare yourself with other people on the site, and invite others to do so as well. Cool stuff.
Sunday, September 17, 2006
A few weeks ago, I wrote about magic and how subconscious problem solving can sometimes seem magical:
When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. ...And indeed, Jason Kottke recently posted about how design works, referencing a couple of other designers, including Michael Bierut of Design Observer, who describes his process like this:
When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you’re lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can’t really explain that part; it’s like magic. Sometimes it even happens before you have a chance to tell me that much about your problem![emphasis mine] It is like magic, but as Bierut notes, this sort of thing is becoming more important as we move from an industrial economy to an information economy. He references a book about managing artists:
At the outset, the writers acknowledge that the nature of work is changing in the 21st century, characterizing it as "a shift from an industrial economy to an information economy, from physical work to knowledge work." In trying to understand how this new kind of work can be managed, they propose a model based not on industrial production, but on the collaborative arts, specifically theater.This is very interesting and dovetails nicely with several topics covered on this blog. Harnessing self-organizing forces to produce emergent results seems to be rising significantly in importance as we proceed towards an information based economy. As noted, collaboration is key. Older business models seem to focus on a more brute force way of solving problems, but as we proceed we need to find better and faster ways to collaborate. The internet, with its hyperlinked structure and massive data stores, has been struggling with a data analysis problem since its inception. Only recently have we really begun to figure out ways to harness the collective intelligence of the internet and its users, and even now, we're only scratching the surface. Collaborative projects like Wikipedia or wisdom-of-crowds aggregators like Digg or Reddit represent an interesting step in the right direction. The challenge here is that we're not facing the problems directly anymore. If you want to create a comprehensive encyclopedia, you can hire a bunch of people to research, write, and edit entries. Wikipedia tried something different. They didn't explicitly create an encyclopedia; they created (or, at least, they deployed) a system that made it easy for large numbers of people to collaborate on a large number of topics. The encyclopedia is an emergent result of that collaboration. They sidestepped the problem, and as a result, they have a much larger and more dynamic information resource.
None of those examples are perfect, of course, but the more I think about it, the more I think that their imperfection is what makes them work. As noted above, you're probably much better off releasing a site that is imperfect and iterating, making changes and learning from your mistakes as you go. When dealing with these complex problems, you're not going to design the perfect system all at once. I realize that I keep saying we need better information aggregation and analysis tools, and that we have these tools, but they leave something to be desired. The point of these systems, though, is that they get better with time. Many older information analysis systems break when you increase the workload quickly. They don't scale well. These newer systems only really work well once they have high participation rates and large amounts of data.
It remains to be seen whether or not these systems can actually handle that much data (and participation), but like I said, they're a good start and they're getting better with time.
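Aggregators like Digg or Reddit are, at their core, vote-counting machines; the interesting part is that the ranking emerges from the crowd rather than from an editor. Here's a toy sketch of the idea in Python (the story names are invented, and real systems layer on time decay, weighting, and spam defenses):

```python
# Toy wisdom-of-crowds aggregator: nobody ranks the stories directly;
# the ordering emerges from many independent votes.
from collections import Counter

def rank_stories(votes):
    """votes is a list of story ids, one entry per vote cast.
    Returns story ids ordered by popularity, most-voted first."""
    return [story for story, _ in Counter(votes).most_common()]

votes = ["wiki-article", "cat-video", "wiki-article",
         "news-item", "wiki-article", "cat-video"]
print(rank_stories(votes))   # ['wiki-article', 'cat-video', 'news-item']
```

The point isn't the counting, which is trivial, but that no one participant needs to know (or agree on) the final ordering for it to emerge.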
Sunday, September 10, 2006
Time is short this week, so it's time for Yet Another Link Dump (YALD!):
Shockingly, it seems that I only needed to use two channels on my Monster FM Transmitter, and both of those channels are the ones I use around Philly. Despite this, I've not been too happy with my FM transmitter thingy. It gets the job done, I guess, but I find myself consistently annoyed at its performance (this trip being an exception). It seems that these things are very idiosyncratic and unpredictable, working better in some cars than others (thus some people swear by one brand, while others will badmouth that same brand). In large cities like New York and Philadelphia, the FM dial gets crowded and thus it's difficult to find a suitable station, further complicating matters. I think my living in a major city area combined with an awkward placement of the cigarette lighter in my car (which I assume is a factor) makes it somewhat difficult to find a good station. What would be really useful would be a list of available stations and an attempt to figure out ways to troubleshoot your car's idiosyncrasies. Perhaps a wiki would work best for this, though I doubt I'll be motivated enough to spend the time installing a wiki system here for this purpose (does a similar site already exist? I did a quick search but came up empty-handed). (There are kits that allow you to tap into your car stereo, but they're costly and I don't feel like paying more for that than I did for the player... )
Posted by Mark on September 10, 2006 at 09:15 PM .: link :.
Sunday, September 03, 2006
Does Magic Exist?
I'm back from my trip and it appears that the guest posting has fallen through. So a quick discussion on magic, which was brought up by a friend on a discussion board I frequent. The question: Does magic exist?
I suppose this depends on how you define magic. Arthur C. Clarke famously said that "Any sufficiently advanced technology is indistinguishable from magic." And that's probably true, right? If some guy can bend spoons with his thoughts, there's probably a rational explanation for it... we just haven't figured it out yet. Does it count as magic if we don't know how he's doing it? What about when we do figure out how he's doing it? What if it really was some sort of empirically observable telekinesis?
After all, magicians have been performing for hundreds of years, relying on sleight of hand and misdirection1 (amongst other tricks of the trade). However, I suspect that's not the type of answer that's being sought.
One thing I think is interesting is the power of thought and how many religious and "magical" traditions were really just ways to harness thought in a productive fashion. For example, crystal balls are often considered to be a magical way to see the future. While not strictly true, it was found that those who look into crystal balls for a long period of time end up entering a sort of trance, similar to hypnosis, and the human mind is able to make certain connections it would not normally make2. Can such a person see the future? I doubt it, but I don't doubt that such people often experience a "revelation" of sorts, even if it is sometimes misguided.
However, you see something similar, though a lot more controlled and a lot less hokey, in a lot of religious traditions. For instance, take Christian Mass and prayer. Mass offers a number of repetitive aspects like singing combined with several chances for reflection and thought. I've always found that going to mass was very helpful in that it put things in a whole new perspective. Superficial things that worried me suddenly seemed less important and much more approachable. Repetitive rituals (like singing in Church) often bring back powerful feelings of the past, etc... further reinforcing the reflection from a different perspective.
Taking it completely out of the spiritual realm, I see very rational people doing the same thing all the time. They just aren't using the same vocabulary. When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sorts of Eureka! moments from time to time. Once you remove the theology from it, prayer is really a similar process.
Once I noticed this, I began seeing similar patterns throughout my life and even history. For example, Archimedes. He was tasked with determining whether a given substance was gold or not (at the time, this was a true challenge). He toiled and slaved at the problem for weeks, pushing all other aspects of his life away. Finally, his wife, sick of her husband's dirty appearance and bad odor, made him take a bath. As he stepped into the tub, he noticed the water rising and had a revelation... this displacement could be used to accurately measure volume, which could then be used to determine density and ultimately whether or not a substance was gold. The moral of the story: Listen to your wife!3
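For the curious, Archimedes' test reduces to a little arithmetic: displaced water gives the volume, mass over volume gives the density, and the density gives the metal away. A toy sketch in Python (the crown figures below are invented, though 19.3 g/cm³ really is the density of pure gold):

```python
# A back-of-the-envelope version of Archimedes' test: displacement
# gives volume, mass/volume gives density, and density unmasks the metal.

GOLD_DENSITY = 19.3  # g/cm^3, density of pure gold

def is_likely_gold(mass_g, displaced_water_cm3, tolerance=0.5):
    """Estimate density via displacement and compare against pure gold."""
    density = mass_g / displaced_water_cm3
    return abs(density - GOLD_DENSITY) <= tolerance

# A 1000 g crown displacing 52 cm^3 (~19.2 g/cm^3): plausibly gold.
print(is_likely_gold(1000, 52))
# The same mass displacing 60 cm^3 (~16.7 g/cm^3): a cheaper alloy.
print(is_likely_gold(1000, 60))
```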
Have I actually answered the question? Well, I may have veered off track a bit, but I find the process of thinking to be interesting and quite mysterious. After all, whatever it is that's going on in our noggins isn't understood very well. It might just be indistinguishable from magic...
1 - Note to self: go see The Illusionist! Also, The Prestige looks darn good. Why does Hollywood always produce these things in pairs? At least it looks like there's good talent involved in each of these productions...
2 - Oddly enough, I discovered this nugget on another trip through the library stacks while I was supposed to be studying in college. Just thought I should call that out in light of recent posting...
3 - Yes, this is an anecdote from the movie Pi.
Sunday, May 14, 2006
The Victorian Internet and Centralized Solutions
A few weeks ago, I wrote a post about how the internet affects our ability to think, pulling from Nicholas Carr's post on internet and mindlessness. I disagreed with Carr's skepticism, and in the comments, Samael noted that Carr was actually using a common form of argument.
This seems to be a pretty common form of argument, though.Carr's argument is in the same form - the sea of information made possible by the internet is to blame for a deterioration in our ability to think. I rejected that because of choice - technology does not force us to think poorly; we choose how we interact with technology (especially on-demand technology like the internet). It's possible to go overboard, but there's nothing forcing that to happen. It's our choice. In any case, this isn't the first time a technology that led to a massive increase in communication has caused these problems. In his book The Victorian Internet, Tom Standage explores the parallels between the telegraph networks of the nineteenth century and the internet of today. Jon Udell summarizes the similarities:
A 19th-century citizen transported to today would be amazed by air travel, Standage suggests, but not by the Internet. Been there, done that.All too often, when I listen to someone describe a problem, I detect a sensationalistic vibe. It's usually not that I totally disagree that something is a problem, but the more I read of history and the more I analyze certain issues, the more I find that much of what people are complaining about today isn't all that new. Yes, the internet has given rise to certain problems, but they're not really new problems. They're the same problems ported to a new medium. As shown in the quote above, many of the internet's problems also affected telegraphy a century and a half ago (I'd wager that the advent of the printing press led to similar issues in its time as well). That doesn't make them less of a problem (indeed, it actually means that the problem is not easily solved!), but it does mean we should perhaps step back and turn down the rhetoric a bit. These are extremely large problems, and they're not going to be solved quickly.
It almost feels like we expect there to be a simple solution for everything. I've observed before that there is a lot of talk about problems that are incredibly complex as if they really aren't that complex. Everyone is trying to "solve" these problems, but as I've noted many times, we don't so much solve problems as we trade one set of problems for another (with the hope that the new set of problems is more favorable than the old). What's more, we expect these "solutions" to come at a high level. In politics, this translates to a Federal solution rather than relying upon state and local solutions. A Federal law has the conceit of being universal and fair, but I don't think that's really true. When it comes to large problems, perhaps the answer isn't large solutions, but small ones. Indeed, that's one of the great things about the structure of our government - we have state and local governments which (in theory) are more responsive and flexible than the Federal government. I think what you find with a centralized solution is something that attempts to be everything to everyone, and as a result, it doesn't help anyone.
For example, Bruce Schneier recently wrote about identity theft laws.
California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.It's a net loss because the state laws are stricter. This also brings up another point about centralized systems - they're much more vulnerable to attack than a decentralized or distributed system. It's much easier to lobby against (or water down) a single Federal law than it is to do the same thing to 50 state laws. State and local governments aren't perfect either, but their very structure makes them a little more resilient. Unfortunately, we seem to keep focusing on big problems and proposing big centralized solutions, bypassing rather than taking advantage of the system our founding fathers wisely put into place.
Am I doing what I decry here? Am I being alarmist? Probably. The trend for increasing federalization is certainly not new. However, in an increasingly globalized world, I'm thinking that resilience will come not from large centralized systems, but at the grassroots level. During the recent French riots, John Robb observed:
Resilience isn't limited to security. It is also tied to economic prosperity. There aren't any answers to this on the national level. The answer is at the grassroots level. It is only at that level that you get the flexibility, innovation, and responsiveness to compete effectively. The first western country that creates a platform for economic interop and at the same time decentralizes power over everything else is going to be a big winner.None of this is to say that grassroots efforts are perfect. There are a different set of issues there. But as I've observed many times in the past, the fact that there are issues shouldn't stop us. There are problems with everything. What's important is that the new issues we face be more favorable than the old...
Saturday, May 13, 2006
Technology Link Dump
My last post on technological change seems to have struck a nerve and I've been running across a lot of things along similar lines this week... Here are a few links on the subject:
Sunday, May 07, 2006
Is Technology Advancing or Declining?
In Isaac Asimov's novel Prelude to Foundation, an unknown mathematician named Hari Seldon travels from his podunk home planet to the Galactic Empire's capital world to give a presentation on a theoretical curiosity he dubs psychohistory (which is essentially a way to broadly predict the future). Naturally, the potential for this theory attracts the powerful, and Seldon goes on the run with the help of a journalist friend named Chetter Hummin. Hummin contends that "The Galactic Empire is Dying." Seldon is frankly surprised by this thesis and eventually asks for an explanation:
... "all over the Galaxy trade is stagnating. People think that because there are no rebellions at the moment and because things are quiet that all is well and that the difficulties of the past few centuries are over. However, political infighting, rebellions, and unrest are all signs of a certain vitality too. But now there's a general weariness. It's quiet, not because people are satisfied and prosperous, but because they're tired and have given up."Hummin acknowledges that he could be wrong (partly out of a desire to manipulate Seldon to develop psychohistory so as to confirm whether or not the Empire really is dying), but those who've read the Foundation Novels know he's right.
The reasons for this digression into decaying Galactic Empires include my affinity for quoting fiction to make a point and a post by Ken at ChicagoBoyz regarding technological stagnation (which immediately made me think of Asimov's declining Empire). Are we in a period of relative technological stagnation? I'm hardly an expert, but I have a few thoughts.
First, what constitutes advance or stagnation? Ken points to a post that argues that the century of maximum change is actually the period 1825-1925. It's an interesting post, but it only pays lip service to the changes he sees occurring now:
From time to time I stumble across articles by technology-oriented writers claiming that we're living in an era of profound, unprecedented technological change. And their claim usually hinges on the emergence of the computer.The post seems to focus on disruptive changes, but if something is not disruptive, does that really mean that technology is not advancing? And why are changes in transportation capabilities (for instance) more important than communication, biology, or medicine? Also, when we're talking about measuring technological change over a long period of time, it's worth noting that advances or declines would probably not move in a straight line. There would be peaks where it seems like everything is changing at once, and lulls when it seems like nothing is changing (even though all the pieces may be falling into place for a huge change).
Most new technological advances are really abstracted efficiencies - it's the great unglamorous march of technology. They're small and they're obfuscated by abstraction, thus many of the advances are barely noticed. Computers and networks represent a massive improvement in information processing and communication capabilities. I'd wager that even if we are in a period of relative technological stagnation (which I don't think we are), we're going to pull out of it in relatively short order because the advent of computers and networks means that information can spread much faster than it could in the past. A while ago, Steven Den Beste argued that the four most important inventions in history are: "spoken language, writing, movable type printing and digital electronic information processing (computers and networks)."
When knowledge could only spread by speech, it might take a thousand years for a good idea to cross the planet and begin to make a difference. With writing it could take a couple of centuries. With printing it could happen in fifty years. With computer networks, it can happen in a week if not less. ... That's a radical change in capability; a sufficient difference in degree to represent a difference in kind. It means that people all over the world can participate in debate about critical subjects with each other in real time.Indeed, part of the reason technologists are so optimistic about the rate of technological change is that we see it all the time on the internet. We see some guy halfway across the world make an observation or write a script, and suddenly it shows up everywhere, spawning all sorts of variants and improvements. When someone invents something these days, it only takes a few days for it to be spread throughout the world and improved upon.
Of course, there are many people who would disagree with Ken's assertion that we're in a period of technological stagnation. People like Ray Kurzweil or Vernor Vinge would argue that we're on the edge of a technological singularity - that technology is advancing so quickly that we can't quantify it, and that we're going to eventually use technology to create an entity with greater than human intelligence.
I definitely think there is a problem with determining the actual rate of change. As I mentioned before, what qualifies as a noteworthy change? It's also worth noting that long-term technological effects are sometimes difficult to forecast. Most people picture the internet as being a centrally planned network, but it wasn't. Structurally, the internet is more like an evolving ecosystem than anything that was centrally designed. Those who worked on the internet in the 1960s and 1970s probably had no idea what it would eventually become or how it would affect our lives today. And honestly, I'm not sure we know today what it will be like in another 30 years...
One of the reasons I quoted Asimov's novel at the beginning of this post is that I think he captured what a technologically declining civilization would be like. The general weariness, the apathy, and the lack of desire to even question why. Frankly, I find it hard to believe that things are slowing down these days. Perhaps we're in a lull (it sure doesn't seem like it though), but I can see that edge, and I don't see weariness in those that will take us there...
Posted by Mark on May 07, 2006 at 06:59 PM .: link :.
Thursday, February 09, 2006
The Art of Rainmaking by Guy Kawasaki: An interesting article about salesmanship and what is referred to as "rainmaking." Kawasaki lists out several ways to practice the art of rainmaking, but this first one caught my eye because it immediately reminded me of Neal Stephenson's Cryptonomicon, and regular readers (all 5 of you) know I can't resist a Stephenson reference.
“Let a hundred flowers blossom.” I stole this from Chairman Mao although I'm not sure how he implemented it. In the context of capitalism (Chairman Mao must be turning over in his grave), the dictum means that you sow seeds in many markets, see what takes root, and harvest what blooms. Many companies freak out when unintended customers buy their product. Many companies also freak out when intended customers buy their product but use it in unintended ways. Don't be proud. Take the money.This immediately reminded me of the data haven (a secure computer system that is protected by its lack of governmental oversight as well as technical means like encryption) in the "modern-day" segments of Cryptonomicon. Randy Waterhouse works for the company that's attempting to set up a data haven, and he finds that most of his customers want to use the data haven to store money. Pretty straightforward, right? Well, most of the people who want to store their money there are criminals of the worst sort. I guess in that particular case, there is reason to freak out at these unexpected customers, but I thought the reference was interesting because while there may be lots of legitimate uses for a data haven, the criminal element would almost certainly be attracted to a way to store their drug money (or whatever) with impunity (that and probably spam, pornography, and gambling). Like all advances in technology, the data haven could be used for good or for ill...
Sunday, February 05, 2006
A Spectrum of Articles
When you browse the web often, especially when you're looking at mostly weblogs, you start to see some patterns emerging. A new site is discovered, then propagates throughout the blogosphere in fairly short order. I'm certainly no expert at spotting such discoveries, but one thing I've noticed being repeatedly referenced this past week is the IEEE Spectrum (a magazine devoted to electrical engineering). I've seen multiple blogs referencing multiple articles from this magazine, though I can't think of a single reference in the past. Here's a few articles that seem interesting:
Sunday, January 01, 2006
Analysis and Ignorance
A common theme on this blog is the need for better information analysis capabilities. There's nothing groundbreaking about the observation, which is probably why I keep running into stories that seemingly confirm the challenge we're facing. A little while ago, Boing Boing pointed to a study on "visual working memory" in which the people who did well weren't better at remembering things than other people - they were better at ignoring unimportant things.
"Until now, it's been assumed that people with high capacity visual working memory had greater storage but actually, it's about the bouncer – a neural mechanism that controls what information gets into awareness," Vogel said.In Feedback and Analysis, I examined an aspect of how the human eye works:
So the brain gets some input from the eye, but it's sending significantly more information towards the eye than it's receiving. This implies that the brain is doing a lot of processing and extrapolation based on the information it's been given. It seems that the information gathering part of the process, while important, is nowhere near as important as the analysis of that data. Sound familiar?Back in high school (and to a lesser extent, college), there were always people who worked extremely hard, but still couldn't manage to get good grades. You know, the people who would spend 10 hours studying for a test and still bomb it. I used to infuriate these people. I spent comparatively little time studying, and I did better than them. Now, there were a lot of reasons for this, and most of them don't have anything to do with me being smarter than anyone else. One thing I found was that if I paid attention in class, took good notes, and spent an honest amount of effort on homework, I didn't need to spend that much time cramming before a test (shocking revelation, I know). Another thing was that I knew what to study. I didn't waste time memorizing things that weren't necessary. In other words, I was good at figuring out what to ignore.
Analysis of the data is extremely important, but you need to have the appropriate data to start with. When you think about it, much of analysis is really just figuring out what is unimportant. Once you remove the noise, you're left with the signal, and you just need to figure out what that signal is telling you. The problem right now is that we keep seeing new and exciting ways to collect more and more information without a corresponding increase in analysis capabilities. This is an important technical challenge that we'll have to overcome, and I think we're starting to see the beginnings of a genuine solution. At this point another common theme on this blog will rear its ugly head. Like any other technological advance, systems that help us better analyze information will involve tradeoffs. More on this subject later this week...
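To make the signal-and-noise point concrete, here's a toy Python example: a median filter "analyzes" a stream of readings simply by ignoring values that don't fit their neighborhood, which is exactly the "bouncer" behavior the study describes (the readings are made up for illustration):

```python
# Toy "bouncer": a median filter suppresses noise not by modeling the
# signal, but by discarding whatever doesn't fit its local neighborhood.
from statistics import median

def median_filter(readings, window=3):
    """Replace each value with the median of its local neighborhood."""
    half = window // 2
    return [median(readings[max(0, i - half):i + half + 1])
            for i in range(len(readings))]

noisy = [10, 12, 9, 11, 30, 10, 11]   # one spurious spike at 30
print(median_filter(noisy))           # the spike never makes it past the door
```

The filter knows nothing about what the data "means"; it gets a cleaner signal purely by being good at ignoring.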
Sunday, December 11, 2005
Looking into the trilemma subject from last week's entry, I stumbled across Jason Kottke's post about what he calls a "Pick Two" system, using the "good, fast, or cheap, pick two" example to start, but then listing out a whole bunch more:
Elegant, documented, on time.I don't know if I agree with all of those, but regardless of their authenticity, Kottke is right to question why the "Pick Two" logic appears to be so attractive. Indeed, I even devised my own a while back when I was looking at my writing habits.
Why is "pick two out of three" the rule? Why not "one out of two" or "four out of six"? Or is "pick two out of three" just a cultural assumption?He also wonders if there is some sort of underlying scientific or economic relationship at work, but was unable to find anything that fit really well. Personally, I found the triangle to be closest to what he was looking for. In a triangle, the sum of the interior angles is always 180 degrees. If you "pick two" of the angles, you know what the third will be. Since time and money are both discrete, quantifiable values, you should theoretically be able to control the quality of your project by playing with those variables.
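The triangle version is easy to demonstrate: fix any two interior angles and the third is forced. A trivial Python sketch of the idea (a loose analogy, not a real project-planning formula):

```python
# A triangle's interior angles sum to 180 degrees, so "picking two"
# fully determines the third -- the geometric cousin of fixing two
# project constraints and living with whatever the third becomes.

def third_angle(a, b):
    """Given two interior angles (in degrees), return the forced third."""
    c = 180 - a - b
    if c <= 0:
        raise ValueError("no valid triangle has those two angles")
    return c

print(third_angle(90, 45))   # 45
print(third_angle(60, 60))   # 60
```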
In a more general sense, I tend to think of a system with three main components as being inherently stable. I think this is because such a system is simple, yet complex enough to allow for a lot of dynamism. As one of the commenters on Kottke's post noted:
Seems like two out of three is the smallest tradeoff that's interesting. One out of two is boring. One out of three doesn't satisfy. Two out of three allows the chooser to feel like s/he is getting something out of the tradeoff (not just 50/50).And once you start getting larger than three, the system begins to get too complex. Tweaking one part of the system has progressively less and less predictable results the bigger the system gets. The good thing about a system with three major components is that if one piece starts acting up, the other two can adjust to overcome the deficiency. In a larger system, the potential for deadlock and unintended consequences begins to increase.
I've written about this stability of three before. The stereotypical example of a triangular system is the U.S. Federal government:
One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn't become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders also did some interesting structural work as well.Another great example of how well a three part system works is a classic trilemma: "Rock, Paper, Scissors."
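The stability of Rock, Paper, Scissors comes from its cyclic structure: each element beats exactly one other and loses to exactly one other, so no single choice can dominate. A quick Python sketch of the cycle:

```python
# Rock, Paper, Scissors as a minimal stable three-part system: each
# element beats exactly one other and loses to exactly one other, so
# nothing in the cycle dominates.

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def winner(a, b):
    """Return the winning throw, or None on a tie."""
    if a == b:
        return None
    return a if BEATS[a] == b else b

print(winner("rock", "scissors"))   # rock
print(winner("paper", "rock"))      # paper
print(winner("rock", "rock"))       # None
```

With only two elements you'd get a boring dominance relation; with the third, every strategy has a check on it, which is the same balancing intuition behind the three branches of government.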
Sunday, December 04, 2005
The Design Trilemma
I've been writing about design and usability recently, including a good example with the iPod and a case where a new elevator system could use some work. Naturally, there are many poorly designed systems out there, and they're easy to spot, but even in the case of the iPod, which I think is well designed and elegant, I was able to find some things that could use improvement. Furthermore, I'm not sure there's all that much that can really be done to improve the iPod design without removing something that detracts more from the experience. As I mentioned in that post, a common theme on this blog has always been the trade-offs inherent in technological advance: we don't so much solve problems as we trade one set of disadvantages for another, in the hopes that the new set is more favorable than the old.
When confronted with an obviously flawed system, most people's first thought is probably something along the lines of: What the hell were they thinking when they designed this thing? It's certainly an understandable lamentation, but after the initial shock of the poor experience, I often find myself wondering what held the designers back. I've been involved in the design of many web applications, and I sometimes find the end result is different from what I originally envisioned. Why? It's usually not that hard to design a workable system, but it can become problematic when you consider how the new system impacts existing systems (or, perhaps more importantly, how existing systems impact new ones). Of course, there are considerations completely outside the technical realm as well.
There's an old engineering aphorism that says Pick two: Fast, Cheap, Good. The idea is that when you're tackling a project, you can complete it quickly, you can do it cheaply, and you can create a good product, but you can't have all three. If you want to make a quality product in a short period of time, it's going to cost you. Similarly, if you need to do it on the cheap and also in a short period of time, you're not going to end up with a quality product. This is what's called a Trilemma, and it has applications ranging from international economics to theology (I even applied it to writing a while back).
Dealing with trilemmas like this can be frustrating when you're involved in designing a system. For example, a new feature that would produce a tangible but relatively minor enhancement to customer experience would also require a disproportionate amount of effort to implement. I've run into this often enough to empathize with those who design systems that turn out horribly. Not that this excuses design failures or that this is the only cause of problems, but it is worth noting that the designers aren't always devising crazy schemes to make your life harder...
Sunday, November 13, 2005
After several weeks of using my new iPod (yes, I'm going to continue rubbing it in for those who don't have one), I've come to realize that there are a few things that are *gasp* not perfect about the iPod. A common theme on this blog has always been the tradeoffs inherent in technological advance: we don't so much solve problems as we trade one set of disadvantages for another, in the hopes that the new set is more favorable than the old.
Don't get me wrong, I love the iPod. It represents a gigantic step forward in my portable media capability, but it's not perfect. It seems that some of the iPod's greatest strengths are also its greatest weaknesses. Let's look at some considerations:
Sunday, November 06, 2005
Elevators & Usability
David Foster recently wrote a post about a new elevator system:
One might assume that elevator technology is fairly static, but then one would be wrong. The New York Times (11/2) has an article about significant improvements in elevator control systems. The idea is that you select your floor before you get on the elevator, rather than after, thereby allowing the system to dispatch elevators more intelligently--a 30% reduction in average trip time is claimed. ... All good stuff; shorter waiting times and presumably lower energy consumption as well.(NYT article is here) Foster has some interesting comments on the management types who want to use this system to avoid being in an elevator with the normal folks, but the story caught my attention from a different angle.
I recently attended the World Usability Day event in Philadelphia, and the keynote speaker (Tom Tullis, of Fidelity Investments) started his presentation with a long anecdote concerning this new elevator technology. It seems that while this technology may have good intentions, its execution could use a little work.
Perhaps it was just the particular implementation at the building he went to, but the system installed there was extremely difficult to use for a first-time user. First, the new system wasn't called out very much, so Tullis had actually gotten into one of the elevators and was flummoxed at the lack of buttons inside. Eventually, after riding the elevator up and then back down to the lobby, he noticed a keypad next to the elevator he had gotten into. So he understandably assumed that he should simply enter the desired floor there, figuring that the elevator would then open and take him to that floor. He typed in his destination floor, and was greeted with a screen that had a large "E" on it (there's an image of this on the right, but the presentation has lots of images and more information on the evolution of the elevator). Obviously an error, right? Well, no. Tullis eventually found a little sign in the lobby with a 6-page (!) manual explaining how the elevators work, and it turns out that each elevator cab has a letter assigned to it; when you enter your floor, it assigns you to one of the elevators. So "E" was referring to the "E" cab, not an error. Now armed with the knowledge of how the system works, Tullis was able to make it to his meeting (10 minutes late).
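Once decoded, the scheme itself is simple enough to sketch. Here's a toy destination-dispatch controller; the class, the grouping heuristic, and the cab labels are my own assumptions for illustration, not the actual system Tullis encountered. Riders enter a floor at the keypad and get assigned a lettered cab, and riders bound for the same floor share a cab (which is where the claimed trip-time savings come from):

```python
# Toy sketch of a destination-dispatch elevator controller.
# Riders enter their destination floor at a lobby keypad; the
# dispatcher answers with a cab letter instead of letting riders
# push buttons inside the cab.

class Dispatcher:
    def __init__(self, cab_labels):
        # each cab tracks the set of floors it has been assigned
        self.cabs = {label: set() for label in cab_labels}

    def request(self, floor):
        # Prefer a cab already stopping at that floor (shared trip);
        # otherwise assign the cab with the fewest pending stops.
        for label, stops in self.cabs.items():
            if floor in stops:
                return label
        label = min(self.cabs, key=lambda l: len(self.cabs[l]))
        self.cabs[label].add(floor)
        return label

dispatcher = Dispatcher("ABCDE")
print(dispatcher.request(12))  # "A" -- rider walks to cab A
print(dispatcher.request(12))  # "A" -- same destination shares the cab
print(dispatcher.request(7))   # "B" -- next emptiest cab
```

A real system would also weigh cab positions and travel direction, but even this toy version shows why the "E" on the screen was a cab assignment, not an error code.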
Naturally, I think this is a bit of an extreme case (though there were a few other bad things about his experience that I didn't even mention). The system was brand new and the building hadn't yet converted all of their elevators to the new system, so it seems obvious that the system usability would improve over time. There are several things that could make that experience easier:
Sunday, October 23, 2005
MP3 Player Update
About a month ago, I wrote about MP3 Players in an attempt to figure out which player was best for me. At the time, I was leaning towards the 20GB iPod Photo, but the Cowon iAudio X5 was giving me serious pause. As such, I sort of just spun my wheels until I heard that Apple was going to announce another change to their iPod line, which ended up being the new iPod Video. This upgrade to the iPod line made my decision a lot easier, and I bought one the night it was announced. It seems that procrastination actually paid off for me.
After 5 days of steady use, I'm quite pleased with the iPod. It's easy to use, elegant, and it does everything I need it to do (and more). ArsTechnica has a thorough review, and I won't bother repeating most of it. The one thing I'll talk about is the "scratching" issue (as the Ars reviewer didn't mention much about that), which seems to be so bad with the iPod nano that many are assuming that the new black iPods will suffer from the same issue. So far, I've yet to get any scratches on my shiny new black iPod, but I have to admit that I'm a careful guy and I generally keep it in the soft carrying case that came with it when I'm not using it. The black model does seem to make fingerprints and the like much more visible, but that's not that big of a deal to me, as it cleans up easily.
The battery life seems excellent for playing music, but it may be a bit lacking when it comes to video. The 30GB model only has 2 hours of video playback, which would be enough for a short movie during a flight, but that's a mixed blessing in my mind, as I wouldn't then be able to listen to music for the remainder of a longer flight... I did download an episode of Lost, and the video itself does appear crisp and clear and surprisingly watchable (considering the relatively small size of the screen). It only plays .m4v files, which is mildly annoying, as most applications (by which I mean the ones I was able to find with 2 minutes of research) that encode in .m4v are only for the Mac. Evan Kirchhoff did an interesting comparison on his blog: Video iTunes vs. Piracy. The iTunes version downloaded faster and took up less space, but was also lower quality (in terms of both video and audio) and the compression wasn't as good either (and the pirated version was also widescreen). I think this is indicative of the fact that the new iPod isn't really the Video iPod, it's an iPod with video. Because of the small screen size, tiny CPU, and limited storage, I think the iTunes downloads make sense right now. As time goes on, I'm sure we'll see more advanced offerings, including higher quality downloads (perhaps even multiple encodings). In any case, the video functionality wasn't that important to me, but it is quite a nice perk (and it may come in useful at some point).
As for getting the iPod up and running in my car, I chose the Monster Cable iCarPlay Wireless FM Transmitter. I've had less time to evaluate this, but so far I've gotten mediocre and uneven performance out of it. Sometimes it's excellent, but sometimes there is a lot of static (and changing stations doesn't seem to help). Part of the problem is that I'm in the Philadelphia area, so there aren't very many available stations (so far, 105.9 seems to work best for me). I suspect this is about as good as an FM transmitter of any kind would get for me, and I like the Monster's setup (3 preset stations); when it's working well, it works really well. Naturally, one of those hard-wired systems that ties the iPod into your stereo controls would be ideal, but they're a bit too expensive ($200+) right now.
All in all, I'm quite happy with my new iPod...
Sunday, October 16, 2005
Operation Solar Eagle
One of the major challenges faced in Iraq is electricity generation. Even before the war, neglect of an aging infrastructure forced scheduled blackouts. To compensate for the outages, Saddam distributed power to desired areas, while denying power to other areas. The war naturally worsened the situation (especially in the immediate aftermath, as there was no security at all), and the coalition and fledgling Iraqi government have been struggling to restore and upgrade power generation facilities since the end of major combat. Many improvements have been made, but attacks on the infrastructure have kept generation at or around pre-war levels for most areas (even if overall generation has increased, the equitable distribution of power means that some people are getting more than they used to, while others are not - ironic, isn't it?).
Attacks on the infrastructure have presented a significant problem, especially because some members of the insurgency seem to be familiar enough with Iraq's power network to attack key nodes, thus increasing the effects of their attacks. Consequently, security costs have gone through the roof. The ongoing disruption and inconsistency of power generation puts the new government under a lot of pressure. The inability to provide basic services like electricity delegitimizes the government and makes it more difficult to prevent future attacks and restore services.
When presented with this problem, my first thought was that solar power may actually help. There are many non-trivial problems with a solar power generation network, but Iraq's security situation combined with lowered expectations and an already insufficient infrastructure does much to mitigate the shortcomings of solar power.
In America, solar power is usually passed over as a large scale power generation system, but things that are problems in America may not be so problematic in Iraq. What are the considerations?
As shown above, there are obviously many challenges to completing such a project, most specifically with respect to economic feasibility, but it seems to me to be an interesting idea. I'm glad that there are others thinking about it as well, though at this point it would be really nice to see something a little more concrete (or at least an explanation as to why this wouldn't work).
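For a sense of the scale involved, here's a back-of-envelope sizing sketch. Every number in it is my own illustrative assumption, not a figure from any actual proposal:

```python
# Back-of-envelope solar sizing for a single household.
# All figures below are illustrative assumptions.

daily_load_kwh = 5.0   # assumed modest household consumption per day
sun_hours = 6.0        # assumed average peak-sun hours
panel_watts = 200      # assumed panel rating
derating = 0.75        # assumed losses: inverter, dust, wiring

panel_daily_kwh = panel_watts / 1000 * sun_hours * derating
panels_needed = int(-(-daily_load_kwh // panel_daily_kwh))  # ceiling division

print(round(panel_daily_kwh, 2))  # 0.9 kWh per panel per day
print(panels_needed)              # 6 panels for this one household
```

Even with generous sun assumptions, a single modest household needs a half-dozen panels, which hints at why economic feasibility dominates any serious discussion of the idea.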
Sunday, September 25, 2005
Feedback and Analysis
Jon Udell recaps some of the events from the Accelerating Change conference. Lots of interesting info on the Singularity theory, as both Vernor Vinge and Ray Kurzweil were in attendance, but what caught my eye was this description of how the eye works with the brain:
The example was a six-layered column in the neocortex connected to a 14x14-pixel patch of the retina. There are, Olshausen said, about 100,000 neurons in that chunk of neocortex. That sounds like a lot of circuitry for a few pixels, and it is, but we actually have no idea how much circuitry it is. ...I found this quite simply amazing. The folks at the conference were interested in this because it means we're that much closer to understanding, and thus being able to artificially reproduce, the brain. However, this has other implications as well.
So the brain gets some input from the eye, but it's sending significantly more information towards the eye than it's receiving. This implies that the brain is doing a lot of processing and extrapolation based on the information it's been given. It seems that the information gathering part of the process, while important, is nowhere near as important as the analysis of that data. Sound familiar? Honestly, I haven't been keeping track of intelligence agencies of late, but the focus on data gathering without a corresponding focus on analysis certainly used to be a problem, and I think this finding is just another piece of evidence that says we need to focus on analysis.
This also applies to the business world. Lots of emphasis is placed on collecting sales data, especially on the internet, but unless you have a large dedicated staff to analyze that data, you won't end up with much in the way of actionable conclusions...
Sunday, September 18, 2005
So I have recently come into the market for an MP3 Player. I know, probably a few years too late, but I figured it's time to take the plunge, as the CD changer in my car decided to stop working and a few hours of listening to the dreck that is referred to as "radio" these days is enough to motivate me to spend tons of money to just make the pain stop.
So the primary goal for this device is going to be an MP3 Player. Naturally, there are all sorts of other features and gadgets that come along with most of the good players on the market, but I consider most of that stuff to be nice to have, but not a necessity. There has to be a way to get the player working in my car (I'm not too picky about that - those FM transmitters should do the trick) and I'll probably be carting the thing around everywhere as well. Rather than run through all the features, I'll run through the candidates and their features. As of now, I'm leaning towards the 20GB iPod Photo.
Sunday, August 21, 2005
I'm currently reading Vernor Vinge's A Deepness in the Sky. It's an interesting novel, and there are elements of the story that resemble Vinge's singularity. (Potential spoilers ahead) The story concerns two competing civilizations that travel to an alien planet. Naturally, there are confrontations and betrayals, and we learn that one of the civilizations utilizes a process to "Focus" an individual on a single area of study, essentially turning them into a brilliant machine. Naturally, there is a lot of debate about the Focused, and in doing so, one of the characters describes it like this:
... you know about really creative people, the artists who end up in your history books? As often as not, they're some poor dweeb who doesn't have a life. He or she is just totally fixated on learning everything about some single topic. A sane person couldn't justify losing friends and family to concentrate so hard. Of course, the payoff is that the dweeb may find things or make things that are totally unexpected. See, in that way, a little of Focus has always been part of the human race. We Emergents have simply institutionalized this sacrifice so the whole community can benefit in a concentrated, organized way.Debate revolves around this concept because people living in this Focused state could essentially be seen as slaves. However, the quote above reminded me of a post I wrote a while ago called Mastery:
There is an old saying "Jack of all trades, Master of none." This is indeed true, though with the demands of modern life, we are all expected to live in a constant state of partial attention and must resort to drastic measures like Self-Censorship or information filtering to deal with it all. This leads to an interesting corollary for the Master of a trade: They don't know how to do anything else!In that post, I quoted Isaac Asimov, who laments that he's clueless when it comes to cars, and relates a funny story about what happened when he once got a flat tire. I wondered if that sort of mastery was really a worthwhile goal, but the artificially induced Focus in Vinge's novel opens the floor up to several questions. Would you volunteer to be focused in a specific area of study, knowing that you would basically do that and only that? No family, no friends, but only because you are so focused on your studies (as portrayed in the novel, doing work in your field is what makes you happy). What if you could opt to be focused for a limited period of time?
There are a ton of moral and ethical questions about the practice, and as portrayed in the book, it's not a perfect process and may not be reversible (at least, not without damage). The rewards would be great - Focusing sounds like a truly astounding feat. But would it really be worth it? As portrayed in the book, it definitely would not, as those wielding the power aren't very pleasant. Because the Focused are so busy concentrating on their area of study, they become completely dependent on the non-Focused to guide them (it's possible for a Focused person to become too obsessed with a problem, to the point where physical harm or even death can occur) and do everything else for them (i.e. feed them, clean them, etc...) Again, in the book, those who are guiding the Focused are ruthless exploiters. However, if you had a non-Focused guide who you trusted, would you consider it?
I still don't know that I would. While the results would surely be high quality, the potential for abuse is astounding, even when it's someone you trust that is pulling the strings. Nothing says they'll stay trustworthy, and it's quite possible that they could be replaced in some way by someone less trustworthy. If the process was softened to the point where the Focused retains at least some control over their focus (including the ability to go in and out), then this would probably be a more viable option. Fortunately, I don't see this sort of thing happening in the way proposed by the book, but other scenarios present interesting dilemmas as well...
Sunday, July 03, 2005
Steven Spielberg's War of the Worlds is a pretty tense affair. The director knows how to lay on the suspense and he certainly applies that knowledge liberally in the film. It's a good thing too, because when he allows a short breather, your mind immediately starts asking questions that can only have embarrassingly illogical answers.
Luckily, Spielberg's version of the infamous H.G. Wells novel focuses on one character, not the big picture of the story. This relegates the aliens in the film to a MacGuffin, a mostly unexplained excuse to place pressure on the protagonist Ray Ferrier (played competently by Tom Cruise). In this respect, it resembles M. Night Shyamalan's Signs more than other recent big budget disaster films like Independence Day. Its pacing and relentless tension make the film feel more like horror than science fiction. Unfortunately, there are enough pseudo-explanations and speculations about the aliens to strain the suspension of disbelief that is required for this film to work. I've found that I generally have more movie-going goodwill than others (i.e. letting art be art), so I didn't mind the lack of details and even some of the odd quirky logic that seems to drive the plot, which really focuses on the aforementioned Ray's relationship with his kids (and not the aliens). Ultimately, there's nothing special about the story, but in the hands of someone as proficient as Spielberg, it works well enough for me. It's visually impressive and quite intense.
Besides, it's not like the concept itself makes all that much sense. In 1898, Wells' novel was probably seen as somewhat realistic, though the Martians-as-metaphor themes didn't escape anyone. In 1938, Orson Welles's infamous radio broadcast of the story scared the hell out of listeners who thought that an actual invasion was occurring. Today, the concept of an advanced alien civilization invading earth has lost much of its edge, perhaps because we understand the science of such a scenario much better than we used to. If you're able to put aside the nagging questions, it still holds a certain metaphorical value, but even that is starting to get a little old.
No explicit motivation is attributed to the aliens in Spielberg's film, but in other stories it generally comes down to the aliens' lust for resources ("They're like locusts. They're moving from planet to planet... their whole civilization. After they've consumed every natural resource they move on..."). This, of course, makes no sense.
Space is big. Huge. From what we know of life in the universe, it appears to be quite rare and extremely spread out. Travel between civilizations may be possible due to something exotic like a wormhole or faster-than-light travel, but even if that were possible (and that's a big if), traversing the distances involved in the usually huge and powerful alien craft is still bound to expend massive amounts of energy. And for what? Resources? What kinds of resources? Usually "resources" is code for energy, but that doesn't make much sense to me. They'd have to have found something workable (perhaps fusion) just to make the trip to Earth, right? In the miniseries V the aliens are after water, which is an impressively ignorant motivation (hydrogen and oxygen are among the universe's most abundant elements and water itself has been observed all over our galaxy). Perhaps the combination of water, mineral resources, a temperate climate, a protective and varied atmosphere, animal and plant life, and relatively stable ecosystems would make Earth a little more attractive.
What else makes Earth so special? There would have to be some sort of resource we have that most other planets don't. Again, Earth is one of the rare planets capable of supporting life, but we can infer that they're not looking for life itself (their first acts invariably include an attempt to exterminate all life they come across. In War of the Worlds, the Alien tripods start by vaporizing every human they see. Later in the film, we see them sort of "eating" humans. This is a somewhat muddled message, to say the least). And whatever this resource is, it would have to justify risking a war with an indigenous intelligent life form. Granted, we probably wouldn't stand much of a chance against their superior technology, but at the very least, our extermination would require the expenditure of yet more energy (further discrediting the notion that what the aliens are after is an energy source). Plus, it's not like we've left the planet alone - we're busy using up the resources ourselves. Also, while our weapons may be no match for alien defenses, they'd be quite sufficient to destroy much of the planet's surface out of spite, rendering the alien invasion moot.
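To put the energy argument in rough numbers, here's a back-of-envelope sketch (the craft mass, speed, and global-consumption figure are all my own illustrative assumptions):

```python
# Back-of-envelope: the kinetic energy bill for a single
# interstellar craft. All figures are illustrative assumptions.

mass_kg = 1.0e7               # assume a 10,000-tonne craft
v = 3.0e7                     # assume 10% of light speed, in m/s

kinetic_joules = 0.5 * mass_kg * v ** 2
world_annual_joules = 6.0e20  # rough global yearly energy consumption

print(kinetic_joules)                        # 4.5e+21
print(kinetic_joules / world_annual_joules)  # roughly 7.5
```

That's several years' worth of humanity's entire energy output just to get one ship up to cruising speed, before decelerating, life support, or war-making. Hardly a sensible way to go prospecting for fuel.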
The only thing that even approaches making any sort of sense is that they want Earth as a new home for themselves. As one of the few planets capable of supporting life, I suppose it could be valuable in that respect. Indeed, in Wells' novel, the Martians attacked earth because their planet was dying. Spielberg's film seems determined to kinda-sorta keep true to the novel, except that the aliens appear to have planned this countless years ago, which makes it seem less likely. But again, why risk invading an already inhabited planet? Some stories have emphasized that the aliens were doing their equivalent of terraforming (this is implied in War of the Worlds when Ray looks out over a bizarrely changed landscape filled with red weeds), which is a good idea, but it still doesn't explain why Earth would be a target. From all appearances, there are plenty of empty planets out there...
So the concept itself is a bit tired to start with. Movies that aren't explicit invasions involving a civilization like our own fare a little better. Alien & Aliens do a good job of this, as have several other films.
In any case, War of the Worlds is still a reasonably good watch, so long as you don't mind the lack of scientific rigor. It's a visually impressive film, with a number of sequences that stand out. And he really doesn't give you all that much time to think about all the flaws...
Sunday, April 10, 2005
Cell Phone Update
Because I know everyone is on the edge of their seat after last week's entry, I ended up going with the Nokia 3120. It's compact, light and has a reasonably long talk time. As far as talk time goes, the Sony Ericsson T237 seems to be king (at least, going by the statistics), but I didn't like the keypad (nor did I particularly love the screen or the controls). The Nokia was better in this respect, and I've always been happy with Nokia phones.
It's a bit of a low end phone, but the high end phones don't seem to have gotten to a point where it's really worth it just yet. The Sony Ericsson W800i seems really interesting. I'm in the market for an MP3 player as well, so it would be really nice to get that functionality with the phone. The cameras in phones are getting better and better as well (to the point where they're better than my digital camera, which is getting pretty old). Hitting three birds with one stone would be really nice, but unfortunately, the W800i isn't out yet (and some are reporting that it won't be released in the States at all), would probably cost a fortune even if it was available, and I'm sure that better models will eventually become available anyway, which is why I don't mind getting the low end model now...
Anyway, thanks for everyone's help. It was very... helpful. Um, yeah. Thanks.
Posted by Mark on April 10, 2005 at 07:22 PM .: link :.
Sunday, April 03, 2005
So I'm in the market for a new cell phone. I'm no expert, but I've been reading up on the subject this weekend. I actually use my cell phone as my primary phone (I don't have a land line), so I might consider going for something other than a base model... but it seems that more advanced phones are loaded with features that I don't really need. What I really want out of the new phone is:
I'm not sure which provider I'm going to go with either, but I'll have to see what my options are. My employer had a deal with AT&T Wireless, so that is what I have now, but AT&T is now Cingular, so I'm not sure if that relationship still exists (or if we switched to something else). I would prefer a CDMA based phone, but several friends have had bad experiences with Sprint and Verizon is a little too expensive for me, especially if I can get a good deal with Cingular (which uses GSM).
In looking at the phones available for Cingular, I'm not especially fond of any available options. The closest thing to what I want is the Sony Ericsson T237 or the Nokia 3120. Both are pretty low end models, but it seems like the big differences in the next steps up are the extraneous features I don't really need (like the camera, Bluetooth, etc...) As of right now, I'm leaning towards the Sony Ericsson T237 (or the Sony Ericsson T637, which is nicer, but is also more expensive and has lots of features I don't especially need). It's nice and small, it apparently has fantastic battery life, and decent call quality. Most reviews I've seen give it reasonable marks and recommend it as a good no-frills phone. Some user reviews give it pretty bad marks though, which is why I'm considering the T637 (despite its extra features).
Of course, I'll need to look at these things in the store before I really make my decision, but any advice on cell-phone buying would be much appreciated. I haven't really looked into Verizon phones yet, but I'm going to give it consideration...
Update: In researching and thinking about this a little more, I think some of the more feature-rich phones might be worth considering, despite my initial distaste. So for now, the front-runner is the T637. We shall see. Suggestions or advice still welcome...
Posted by Mark on April 03, 2005 at 04:35 PM .: link :.
Sunday, March 27, 2005
Slashdot links to a fascinating and thought provoking one hour (!) audio stream of a speech "by futurist and developmental systems theorist, John Smart." The talk is essentially about the future of technology, more specifically information and communication technology. Obviously, there is a lot of speculation here, but it is interesting so long as you keep it in the "speculation" realm. Much of this is simply a high-level summary of the talk with a little commentary sprinkled in.
He starts by laying out some key motivations or guidelines for thinking about this sort of thing, paraphrasing David Brin (and I, in turn, am paraphrasing Smart):
We need a pragmatic optimism, a can-do attitude, a balance between innovation and preservation, honest dialogue on persistent problems, ... tolerance of the imperfect solutions we have today, and the ability to avoid both doomsaying and a paralyzing adherence to the status quo. ... Great input leads to great output.So how do new systems supplant the old? They do useful things with less matter, less energy, and less space. They do this until they reach some sort of limit along those axes (a limitation of matter, energy, or space). It turns out that evolutionary processes are great at this sort of thing.
Smart goes on to list three laws of information and communication technology:
This is about halfway through the speech; he goes on to list many examples and explore some more interesting concepts. Here are some bits I found interesting.
Posted by Mark on March 27, 2005 at 08:40 PM .: link :.
Sunday, May 02, 2004
The Unglamorous March of Technology
We live in a truly wondrous world. The technological advances over just the past 100 years are astounding, but, in their own way, they're also absurd and even somewhat misleading, especially when you consider how these advances are discovered. More often than not, we stumble onto something profound by dumb luck or by brute force. When you look at how a major technological feat was accomplished, you'd be surprised by how unglamorous it really is. That doesn't make the discovery any less important or impressive, but we often take the results of such discoveries for granted.
For instance, how was Pi originally calculated? Chris Wenham provides a brief history:
So according to the Bible it's an even 3. The Egyptians thought it was 3.16 in 1650 B.C. Ptolemy figured it was 3.1416 in 150 AD. And on the other side of the world, probably oblivious to Ptolemy's work, Zu Chongzhi calculated it to 355/113. In Baghdad, circa 800 AD, al-Khwarizmi agreed with Ptolemy; 3.1416 it was, until James Gregory begged to differ in the late 1600s.π is an important number, and being able to figure out what it is has been a significant factor in the advance of technology. While all of these numbers are pretty much the same (to varying degrees of precision), isn't it absurd that someone figured out π by dropping 34,000 pins on a grid? We take π for granted today; we don't have to go about finding the value of π, we just use it in our calculations.
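The pin-dropping approach survives today as Monte Carlo estimation. Here's a quick sketch of its simplest modern cousin: throw random points at a unit square and count how many land inside the inscribed quarter circle, whose area is π/4 (the function name and sample count are mine, chosen to echo the 34,000 pins):

```python
import random

def estimate_pi(samples, seed=42):
    """Throw random darts at the unit square; the fraction landing
    inside the inscribed quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(34_000))  # hovers near 3.14; precision improves slowly
```

The absurdity stands: the error shrinks only with the square root of the number of "pins," so each extra digit of π costs about a hundred times more throws.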
In Quicksilver, Neal Stephenson portrays several experiments performed by some of the greatest minds in history, and many of the things they did struck me as especially unglamorous. Most would point to the dog and bellows scene as a prime example of how unglamorous the unprecedented age of discovery recounted in the book really was (and they'd be right), but I'll choose something more mundane (page 141 in my edition):
"Help me measure out three hundred feet of thread," Hooke said, no longer amused.And, of course, the experiment was a failure. Why? The scale was not precise enough! The book is filled with similar such experiments, some successful, some not.
Another example is telephones. Pick one up, enter a few numbers on the keypad and voila! you're talking to someone halfway across the world. Pretty neat, right? But how does that system work, behind the scenes? Take a look at the photo on the right. This is a typical intersection in a typical American city, and it is absolutely absurd. Look at all those wires! Intersections like that are all over the world, which is part of the reason I can pick up my phone and talk to someone so far away. Another part of the reason I can do that is that almost everyone has a phone. And yet, this system is perceived to be elegant.
Of course, the telephone system has grown over the years, and what we have now is elegant compared to what we used to have:
The engineers who collectively designed the beginnings of the modern phone system in the 1940's and 1950's only had mechanical technologies to work with. Vacuum tubes were too expensive and too unreliable to use in large numbers, so pretty much everything had to be done with physical switches. Their solution to the problem of "direct dial" with the old rotary phones was quite clever, actually, but by modern standards was also terribly crude; it was big, it was loud, it was expensive and used a lot of power and worst of all it didn't really scale well. (A crossbar is an N² solution.) ... The reason the phone system handles the modern load is that the modern telephone switch bears no resemblance whatever to those of 1950's. Except for things like hard disks, they contain no moving parts, because they're implemented entirely in digital electronics.So we've managed to get rid of all the moving parts and make things run more smoothly and reliably, but isn't it still an absurd system? It is, but we don't really stop to think about it. Why? Because we've hidden the vast and complex backend of the phone system behind innocuous looking telephone numbers. All we need to know to use a telephone is how to operate it (i.e. how to punch in numbers) and what number we want to call. Wenham explains, in a different essay:
The numbers seem pretty simple in design, having an area code, exchange code and four digit number. The area code for Manhattan is 212, Queens is 718, Nassau County is 516, Suffolk County is 631 and so-on. Now let's pretend it's my job to build the phone routing system for Emergency 911 service in the New York City area, and I have to route incoming calls to the correct police department. At first it seems like I could use the area and exchange codes to figure out where someone's coming from, but there's a problem with that: cell phone owners can buy a phone in Manhattan and get a 212 number, and yet use it in Queens. If someone uses their cell phone to report an accident in Queens, then the Manhattan police department will waste precious time transferring the call.He also mentions cell phones, which seem somewhat less absurd than plain old telephones, but when you think about it, all we've done with cell phones is abstract away the telephone lines. We're still connecting to a cell tower (and towers need to be placed densely throughout the world), and from there a call is often routed through the plain old telephone system. If we could see the RF layer in action, we'd be astounded; it would make the telephone wires look organized and downright pleasant by comparison.
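Wenham's 911 example boils down to routing by numeric prefix. A toy sketch (the dispatch table and function names here are hypothetical, purely for illustration; real E911 systems route on the caller's location, not their number) makes both the appeal and the flaw concrete:

```python
# Hypothetical table mapping area codes to dispatch centers.
# Real E911 routing uses the caller's location, not their number --
# precisely because of the flaw demonstrated below.
AREA_CODE_TO_DISPATCH = {
    "212": "Manhattan PD",
    "718": "Queens PD",
    "516": "Nassau County PD",
    "631": "Suffolk County PD",
}

def route_by_number(phone_number: str) -> str:
    """Naive routing: look only at the first three digits."""
    return AREA_CODE_TO_DISPATCH.get(phone_number[:3], "statewide operator")

# A landline in Queens routes correctly...
print(route_by_number("7185551234"))   # Queens PD
# ...but a Manhattan-bought cell phone used in Queens does not:
print(route_by_number("2125551234"))   # Manhattan PD -- wrong borough
```

The table works perfectly for landlines, whose prefixes really do encode geography; the abstraction leaks as soon as the number and the caller's location come apart.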
The act of hiding the physical nature of a system behind an abstraction is very common, but it turns out that all major abstractions are leaky to some degree. And all of those leaks, in their own way, are useful.
One of the most glamorous technological advances of the past 50 years was the advent of space travel. Contemplating the heavens is an awe-inspiring and humbling experience, to be sure, but when you start breaking things down to the point where we can put a man in space, things get very dicey indeed. When it comes to space travel, there is no more glamorous a person than the astronaut, but again, how does one become an astronaut? One needs to pore over and memorize giant telephone-book-sized volumes filled with technical specifications and detailed schematics. Hardly a glamorous proposition.
Steven Den Beste recently wrote a series of articles concerning the critical characteristics of space warships, and it is fascinating reading, but one of the things that struck me about the whole concept was just how unglamorous space battles would be. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. This is, perhaps, the reason so many science fiction movies and books seem to flout the rules of physics. As a side note, I think a spectacular film could be made while still obeying the rules of physics, if only because we're so used to absurd, physics-defying space battles.
None of this is to say that technological advances aren't worthwhile or that those who discover new and exciting concepts are somehow not impressive. If anything, I'm more impressed at what we've achieved over the years. And yet, since we take these advances for granted, we marginalize the effort that went into their discovery. This is due in part to the necessary abstractions we make to implement various systems. But when abstractions hide the crude underpinnings of technology, we see that technology and its creation as glamorous, thus bestowing honors upon those who make the discovery (perhaps for the wrong reasons). It's an almost paradoxical cycle. Perhaps because of this, we expect newer discoveries and innovations to somehow be less crude, but we must realize that all of our discoveries are inherently crude.
And while we've discovered a lot, it is still crude and could use improvements. Some technologies have stayed the same for thousands of years. Look at toilet paper. For all of our wondrous technological advances, we're still wiping our ass with a piece of paper. The Japanese have the most advanced toilets in the world, but they've still not figured out a way to bypass the simple toilet paper (or, at least, abstract the process). We've got our work cut out for us. Luckily, we're willing to go to absurd lengths to achieve our goals.
Posted by Mark on May 02, 2004 at 09:47 PM .: link :.
Wednesday, April 21, 2004
Steven Den Beste has a fascinating post about the critical characteristics of space warships. He approaches the question from a realistic angle, mostly relying on current technology, only extrapolating reasonable advances. He rules out the sci-fi stuff ("hyperspace," "subspace," "leap cannon," etc...) right from the start, and a few things struck me while reading it.
This post will deal with one of the things that he has (reasonably) decided not to include in his discussion: energy shields. I'm doing this mostly as a thought exercise. I've found that writing about a subject helps me learn about it, and this is something I'd like to know more about. That said, I don't know how conclusive this post will be. As it stands now, the post will raise more questions than it answers. Another post will deal with a subject I've been thinking about a lot lately, which is how unglamorous technological advance can be, and how space battles might be a good example. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. There is a reason why science fiction films flout the rules of physics. But that is another topic for another post.
Once he discards the useless physics-defying science fiction inventions, Den Beste goes on to list a number of possible weapons, occasionally mentioning defense systems. Given that I'll be focusing on defense systems, it's worth noting the types of attacks that will need to be repelled. Here is a basic list of weapons for use in a space battle:
Plasma is basically a collection of molecules, atoms, electrons and positively charged ions, and it makes up some 99% of the visible matter in the universe. Hot plasma is present in the sun - at high temperatures hydrogen nuclei can fuse into heavier nuclei despite their mutual electric repulsion. When these particles collide in the sun, they acquire enough energy to fuse, and release a tremendous amount of energy. Unfortunately, hot plasmas are not of much use for defensive purposes, as the temperatures are too high and would be destructive.
Colder plasmas, however, would do the trick. A plasma's charged particles interact constantly, creating localized attractions or repulsions. An external energy attack, from weapons such as lasers, high powered microwave bursts, or particle beams, would theoretically be caught up in the plasma's complex electromagnetic fields and dissipated or deflected. If the plasma could be made sufficiently dense, it could even deflect missiles and other projectiles. The process of absorbing and dissipating energy could also go a long way toward defeating radar... but as Den Beste noted, IR detectors would be the primary sensor used in space, so this sort of "cloaking" ability would be of limited use.
Interestingly, such a cold plasma shield could also be applied to projectiles such as missiles, shielding them from the defensive measures Den Beste thinks would be used against them.
Unfortunately, cold plasma requires a lot of energy to produce. And since I can't seem to find an adequate explanation of what cold plasma really is or, rather, how it is produced, the use of cold plasma brings up a number of questions. My primary concern has to do with the energy needed to produce cold plasma, and how the excess heat would be dissipated. Den Beste notes:
Warships will be hot and will have to shed a lot of heat in order to avoid destroying themselves.Now, you've created a cold plasma force field around your spacecraft that could theoretically deflect electromagnetic attacks from weapons like lasers, masers, and particle beams, but what about the heat produced on your own ship? How would heat interact with the cold plasma? Would the plasma absorb the heat? If it did, wouldn't you saturate the plasma shield (after all, you'd be producing an awful lot of heat even without the massive amount of energy needed to set up the plasma field, and when you add that, couldn't you overload it)? If you surrounded your ship, how would the heat escape? Exposing the radiator would defeat the purpose of having a shield in the first place, as the radiator would be one of the primary targets.
Well, perhaps I've figured out why Den Beste ruled out energy shields in the first place. Sorry if this seemed like a waste of time, but I found it at least somewhat interesting, even if it wasn't conclusive. And I've also found a new respect for the type of theoretical discussions Den Beste is so good at... Stay tuned for a more general (and hopefully more interesting) discussion on the unglamorous march of technology.
Update: Buckethead has an excellent series of 4 posts on War in Space (one, two, three, four). I am clearly outclassed. One of these days I'll crank out that post about the unglamorous side of technology advancement, but for now, I'll leave the technical aspects in the capable hands of Den Beste and Buckethead...
Posted by Mark on April 21, 2004 at 08:21 PM .: link :.
Sunday, March 28, 2004
USA Today has a fascinating look inside an interesting CIA initiative:
...In-Q-Tel is the venture-capital arm of the CIA.The program has apparently been very successful, and will most likely be renewed. The DoD has expressed interest in duplicating the model for their own purposes.
Despite its name being inspired by James Bond's Q, In-Q-Tel doesn't seem to be investing in high-tech weaponry or spy gadgets. Their focus seems to run more towards finding, sorting and communicating data. Products range from an application that can translate documents from Arabic into English, to an advanced Google-like search engine, to weblogging software(!). Public/private partnerships aren't very common in the US, but there are some exceptions, and in this case, it looks like it was a good idea.
...Tenet explained that the CIA and government labs had always been on the leading edge of tech. But the Internet boom poured so much money into tech start-ups, the start-ups leapt ahead of the CIA. And scientists and technologists who had innovative ideas went off to be entrepreneurs and get rich; they didn't want government salaries at the CIA.Of course, the public/private and somewhat low profile nature of the program makes for some strange rumors:
In-Q-Tel has become known for being thorough yet furtive. These days, when a young company is making a presentation at an event, an unknown man or woman might come in, listen intently, then disappear. Such is In-Q-Tel's mystique that entrepreneurs often believe those are In-Q-Tel scouts even when they're not.As I said before, the program has been successful (though success is measured in more than just money here - they're actually finding useful applications, and that's what the real goal is) but the CIA is characteristically cautious:
"It has far exceeded anything I could've hoped for when we had that first meeting," Augustine says. But he adds a note of caution, apropos for the CIA, which had been stuck for too long in old ways of finding new technology. "No idea is good forever," Augustine says. "We'll have to see how it holds up with time."Update: Charles Hudson is a blogger who works for In-Q-Tel. Interesting.
Posted by Mark on March 28, 2004 at 04:58 PM .: link :.
Sunday, March 14, 2004
My New Toy
Pictured to the right is my new toy, a Pioneer DVR 106 DVD±RW Burner. I wanted to get a DVD drive for the computer so that I could do screen grabs for film reviews and scene analysis (for instance, it would help a great deal to have screenshots on my scene analysis of Rear Window), but when I looked into it, I found out that DVR drives were shockingly inexpensive. In fact, it cost approximately $100 less than my CD Burner (which I bought several years ago, when they hadn't yet become commonplace). For the record, a simple DVD ROM drive is also shockingly inexpensive, but the added functionality in a DVR drive seemed worth the price.
Posted by Mark on March 14, 2004 at 08:16 PM .: link :.
Sunday, February 15, 2004
Deterministic Chaos and the Simulated Universe
After several months of absence, Chris Wenham has returned with a new essay entitled 2 + 2. In it, he explores a common idea:
Many have speculated that you could simulate a working universe inside a computer. Maybe it wouldn't be exactly the same as ours, and maybe it wouldn't even be as complex, either, but it would have matter and energy and time would elapse so things could happen to them. In fact, tiny little universes are simulated on computers all the time, for both scientific work and for playing games in. Each one obeys simplified laws of physics the programmers have spelled out for them, with some less simplified than others.As always, the essay is well done and thought provoking, exploring the idea from several mathematical angles. But it makes the assumption that the universe is both deterministic and infinitely quantifiable. I am certainly no expert on chaos theory, but it seems to me that it has an important bearing on this subject.
A system is said to be deterministic if its future states are strictly dependent on current conditions. Historically, it was thought that all processes occurring in the universe were deterministic, and that if we knew enough about the rules governing the behavior of the universe and had accurate measurements about its current state, we could predict what would happen in the future. Naturally, this theory has proven very useful in modeling real world events such as the flight of projectiles or the ebb and flow of the tides, but there have always been systems which were more difficult to predict. Weather, for instance, is notoriously tricky to predict. It was always thought that these difficulties stemmed from an incomplete knowledge of how the system works or from inaccurate measurement techniques.
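To make that concrete, here's a minimal sketch (simple textbook ballistics with illustrative numbers, ignoring air resistance) of the kind of well-behaved deterministic system where prediction works: a small error in the measured initial conditions produces only a comparably small error in the prediction.

```python
import math

# Range of a projectile launched at speed v (m/s) and angle theta (degrees),
# ignoring air resistance: R = v^2 * sin(2*theta) / g
def projectile_range(v: float, theta_deg: float, g: float = 9.81) -> float:
    return v * v * math.sin(2 * math.radians(theta_deg)) / g

exact = projectile_range(30.0, 45.0)
approx = projectile_range(30.1, 45.0)   # about 0.3% error in measured speed
print(f"exact: {exact:.2f} m, with measurement error: {approx:.2f} m")

# A small input error yields a comparably small output error -- the
# hallmark of a predictable (non-chaotic) deterministic system.
```

Chaotic systems, as we'll see, break exactly this property.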
In his essay, Wenham discusses how a meteorologist named Edward Lorenz stumbled upon the essence of what is referred to as chaos (or nonlinear dynamics, as it is often called):
Lorenz's simulation worked by processing some numbers to get a result, and then processing the result to get the next result, thus predicting the weather two moments of time into the future. Let's call them result1, which was fed back into the simulation to get result2. result3 could then be figured out by plugging result2 into the simulation and running it again. The computer was storing resultn to six decimal places internally, but only printing them out to three. When it was time to calculate result3 the following day, he re-entered result2, but only to three decimal places, and it was this that led to the discovery of something profound.This phenomenon is called "sensitive dependence on initial conditions." For the systems in which we could successfully make good predictions (such as the path of a flying object), only a reasonable approximation of the initial state is necessary to make a reasonably accurate prediction. In a system exhibiting sensitive dependence, however, reasonable approximations of the initial state do not yield reasonable approximations of the future state.
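Lorenz's accident is easy to reproduce with any chaotic system. Here's a minimal sketch using the logistic map (a standard textbook example of chaos, standing in for Lorenz's weather model as an assumption of this illustration): run the iteration from a value stored to six decimal places, then again from the same value "re-entered" to only three.

```python
# The logistic map x -> r*x*(1-x) with r=4 is a textbook chaotic system.
# We repeat Lorenz's accident: run a simulation, then restart it from a
# value rounded to three decimal places, and compare the trajectories.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

def iterate(x: float, steps: int) -> float:
    for _ in range(steps):
        x = logistic(x)
    return x

full = 0.123456          # "six decimal places internally"
truncated = 0.123        # "re-entered... only to three decimal places"

for steps in (1, 5, 15, 30):
    a, b = iterate(full, steps), iterate(truncated, steps)
    print(f"after {steps:>2} steps: {a:.6f} vs {b:.6f}  (diff {abs(a - b):.6f})")
```

The two runs start out less than 0.0005 apart, but the gap grows rapidly with each iteration until the trajectories have nothing to do with one another, even though every step is perfectly deterministic.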
So here comes the important part: For a chaotic system such as weather, in order to make useful long term predictions, you need measurements of initial conditions with infinite accuracy. What this means is that even a deterministic system, which in theory can be modeled by mathematical equations, can generate behavior which seems random and unpredictable. This manifests itself in nature all the time. Weather is the typical example, but there is evidence that the human brain, too, is governed by deterministic chaos. Indeed, our brain's ability to generate seemingly unpredictable behavior is an important component of both survival and creativity.
So my question is, if it is not possible to quantify the initial conditions of a chaotic system with infinite accuracy, is that system really deterministic? In a sense, yes, even though it is impossible to calculate it:
Michelangelo claimed the statue was already in the block of stone, and he just had to chip away the unnecessary parts. And in a literal sense, an infinite number of universes of all types and states should exist in thin air, indifferent to whether or not we discover the rules that exactly reveal their outcome. Our own universe could even be the numerical result of a mathematical equation that nobody has bothered to sit down and solve yet.The answer might be there, whether we can calculate it or not, but even if it is, can we really do anything useful with it? In the movie Pi, a mathematician stumbles upon an enigmatic 216 digit number which is supposedly the representation of the infinite, the true name of God, and thus holds the key to deterministic chaos. But it's just a number, and no one really knows what to do with it, not even the mathematician who discovered it (he could make accurate predictions for the stock market, though he could not understand why, and the knowledge came at a price). In the end, it drove him mad. I don't pretend to have any answers here, but I think the makers of Pi got it right.
Posted by Mark on February 15, 2004 at 02:33 PM .: link :.
Wednesday, January 28, 2004
Established in 1960, JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. Most of its work is the product of an annual summer study, and the group has done work for the DOD (including DARPA), FBI, CIA and DOE. FAS recently collected and published several recent unclassified JASON studies on their website. They cover a wide area of subjects, ranging from quantum computing to nanotechnology to nuclear weapon maintenance. There is way too much material there to summarize, so here are just a few that caught my eye:
Posted by Mark on January 28, 2004 at 08:13 PM .: link :.
Wednesday, January 21, 2004
NASA, Commercialization, and Agility
The Laughing Wolf comments on the "new" space initiative, paying particular attention to commercial interest in space... and the lack of any mention of commercialization in the new plan. He reads something into this which goes along with my thoughts on the institutional agility that will be necessary to make it to the moon and beyond.
You know, the President is not nearly as stupid as his critics try to portray him to be. In fact, he has been pretty shrewd and smart on many major issues. He may not be the best spoken person around, but he is not stupid. Do you think that he may have had some method to his madness here? For what if private industry does create and provide launch services? What if they do send probes on to the moon? Do you think that maybe NASA might, by dint of budget and language, be encouraged to make use of it? It is an intriguing possibility, since the actual language and such is not yet fully available, or perhaps even fully worked out.In my post on this subject, I didn't write about what the next big advance in space travel would be or who would create it, only that it would happen and that NASA would need to be agile enough to react to and exploit it. I noticed that the proposal didn't make any mention of commercial efforts, but I didn't pick up on the idea that the absence of such points was something of a challenge to the private sector.
Also, for more on the space effort, Jay Manifold has been blogging up a storm over at A Voyage To Arcturus. There is too much good stuff there to summarize, but if you're interested in this subject, check it out. Alright, one interesting thing I saw there was this conceptual illustration of a modular Crewed Exploration Vehicle. Of course, as both Jay and the Laughing Wolf note, the CEV is meant to accomplish many and varied goals, which means that while it may be versatile, it won't do any of its many tasks very well... but it is interesting nonetheless.
Posted by Mark on January 21, 2004 at 06:08 PM .: link :.
Sunday, January 18, 2004
To the Moon!
President Bush has laid out his vision for space exploration. Reaction has mostly been lukewarm. Naturally, there are opponents and proponents, but in my mind it is a good start. That we've changed focus to include long term manned missions on the Moon and a mission to Mars is a bold enough move for now. What is difficult is that this is a program that will span several decades... and several administrations. There will be competition and distractions. To send someone to Mars on the schedule Bush has set requires a consistent will among the American electorate as well. However, given the technology currently available, it might prove to be a wise move.
A few months ago, in writing about the death of the Galileo probe, I examined the future of manned space flight and drew a historical analogy with the pyramids. I wrote:
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.We should, and I'm glad we're orienting ourselves in this direction. Bush's plan appeals to me because of its pragmatism. It doesn't seek to simply fly to Mars, it seeks to leverage the Moon first. We've already been to the Moon, but it still holds much value as a destination in itself as well as a testing ground and possibly even a base from which to launch or at least support our Mars mission. Some, however, find the financial side of things a little too pragmatic:
In its financial aspects, the Bush plan also is pragmatic -- indeed, too much so. The president's proposal would increase NASA's budget very modestly in the near term, pushing more expensive tasks into the future. This approach may avoid an immediate political backlash. But it also limits the prospects for near-term technological progress. Moreover, it gives little assurance that the moon-Mars program will survive the longer haul, amid changing administrations, economic fluctuations, and competition from voracious entitlement programs.There's that problem of keeping everyone interested and happy in the long run again, but I'm not so sure we should be too worried... yet. Wretchard draws an important distinction: we've laid out a plan to voyage to Mars - not a plan to develop the technology to do so. Efforts will be proceeding on the basis of current technology, but as Wretchard also notes in a different post, current technology may be unsuitable for the task:
Current launch costs are on the order of $8,000/lb, a number that will have to be reduced by a factor of ten for the habitation of the moon, the establishment of Lagrange transfer stations or flights to Mars to be feasible. This will require technology, and perhaps even basic physics, that does not even exist. Simply building bigger versions of the Saturn V will not work. That would be "like trying to upgrade Columbus's Nina, Pinta, and Santa Maria with wings to speed up the Atlantic crossing time. A jet airliner is not a better sailing ship. It is a different thing entirely." The dream of settling Mars must await an unforeseen development.Naturally, unforeseen developments are notoriously tricky to anticipate, and while we must pursue alternate forms of propulsion, it would be unwise to hold off on the voyage until such a development occurs. We must strike a delicate balance between concentration on the goal and the means to achieve that goal. As Wretchard notes, this is largely dependent on timing. What is also important here is that we are able to recognize this development when it happens and that we leave our program agile enough to react effectively to it.
Recognizing this development will prove interesting. At what point does a technology become mature enough to use for something this important? This may be relatively straightforward, but it is possible that we could jump the gun and proceed too early (or, conversely, wait too long). Once recognized, we need to be agile, by which I mean that we must develop the capacity to seamlessly adapt the current program to exploit this new development. This will prove challenging, and will no doubt require a massive increase in funding, as it will also require a certain amount of institutional agility - moving people and resources to where we need them, when we need them. Once we recognize our opportunity, we must pounce without hesitation.
It is a bold and challenging, yet judiciously pragmatic, vision that Bush has laid out, but this is only the first step. The truly important challenges are still a few years off. What is important is that we recognize and exploit any technological advances on our way to Mars, and we can only do so if we are agile enough to effectively react. Exploration of the frontiers is a part of my country's identity, and it is nice to see us proceeding along these lines again. Like the Egyptians so long ago, this mammoth project may indeed inspire a unity amongst our people. In these troubled times, that would be a welcome development. Though Europe, Japan, and China have also shown interest in such an endeavor, I, along with James Lileks, like the idea of an American being the first man on Mars:
When I think of an American astronaut on Mars, I can't imagine a face for the event. I can tell you who staffed the Apollo program, because they were drawn from a specific stratum of American life. But things have changed. Who knows who we'd send to Mars? Black pilot? White astrophysicist? A navigator whose parents came over from India in 1972? Asian female doctor? If we all saw a bulky person bounce out of the landing craft and plant the flag, we'd see that wide blank mirrored visor. Sex or creed or skin hue - we'd have no idea.Indeed.
Update 1.21.04: More here.
Posted by Mark on January 18, 2004 at 05:16 PM .: link :.
Tuesday, October 07, 2003
A Compendium of DARPA Programs
The Defense Advanced Research Projects Agency (DARPA) has been widely criticized for several of its more controversial programs, including the now-defunct Terrorism Information Awareness program (rightly so) and a futures market used to predict terror (perhaps wrongly so). But (as Steven Aftergood has noted) it has not received the credit it arguably deserves for conducting those programs in an unclassified form, in which they can be freely debated, criticized and attacked.
DARPA has recently published a complete descriptive summary of all of its (unclassified) programs, and some of it reads like a science fiction author's wishlist. It's a fascinating collection of programs and it makes for absorbing reading.
I've read a good portion of the report, and while I find it impossible to provide a summary (it is, after all, a summary in itself), I was particularly enthralled by how DARPA is attempting to exploit the intersection of biology, information technology, and the physical sciences. For instance:
The Brain Machine Interface Program will create new technologies for augmenting human performance through the ability to noninvasively access codes in the brain in real time and integrate them into peripheral device or system operations.Essentially this means that they are attempting to create an interface in which a brain accepts and controls a mechanical device as a natural part of its body. The applications for this are near limitless and, though designed for military applications (of the type you're likely to see in science fiction novels), this technology would be extremely valuable for giving paralysis or amputation patients the ability to control a motorized wheelchair or a prosthetic limb as an extension of their body.
As you might expect, many of the projects work along similar lines and could theoretically provide supporting characteristics to one another. For instance, it seems to me that a brain machine interface would be particularly useful if paired with the Exoskeletons for Human Performance Augmentation program, again creating something right out of science fiction. It also raises some rather interesting questions about our place in evolution, and whether making the transition to a cyborg-like species is inevitable. I remember Arthur C. Clarke advancing the idea that as technology progressed far beyond our capabilities, human beings would find a way to transfer their consciousness to a mechanical (or, given the amount of biological engineering going on, let's just say constructed) being, as these machines would be more efficient than the human body. Of course, that is quite far off, but it is interesting to ponder (and Clarke even went further, postulating that we would only spend a short time in our "robot" form and eventually transcend our physical form entirely...)
Again, I found the biological technologies (as well as many of the nanotechnologies) being explored to be the most interesting bunch. One such program is attempting to actively collect information from insect populations to map areas for biohazards; another is set to develop biomolecular motors (nanomachines that convert chemical energy into mechanical work at a very high rate of efficiency). There are a lot of programs that utilize BioMagnetics and nanotechnology to attain a better monitoring capability for the human body.
Some of these projects or ideas have been around for a while and many of them are still in preliminary phases, but it is still interesting to see the breadth of ideas DARPA is exploring...
Note: Some of the information in the report is out of date, notably with respect to the "Total Information Awareness" project which was later renamed "Terrorism Information Awareness" and is now defunct.
Posted by Mark on October 07, 2003 at 10:59 PM .: link :.
Monday, September 08, 2003
My God! It's full of stars!
What Galileo Saw by Michael Benson : A great New Yorker article on the remarkable success of the Galileo probe. James Grimmelmann provides some fantastic commentary:
Launched fifteen years ago with technology that was a decade out of date at the time, Galileo discovered the first extraterrestrial ocean, holds the record for most flybys of planets and moons, pointed out a dual star system, and told us about nine more moons of Jupiter.And the brilliance doesn't end there:
As if that wasn't enough hacker brilliance, design changes in the wake of the Challenger explosion completely ruled out the original idea of just sending Galileo out to Mars and slingshotting towards Jupiter. Instead, two Ed Harris characters at NASA figured out a triple bank shot -- a Venus flyby, followed by two Earth flybys two years apart -- to get it out to Jupiter. NASA has come in for an awful lot of criticism lately, but there are still some things they do amazingly well.Score another one for NASA (while you're at it, give Grimmelmann a few points for the Ed Harris reference). Who says NASA can't do anything right anymore? Grimmelmann observes:
The Galileo story points out, I think, that the problem is not that NASA is messed-up, but that manned space flight is messed-up.Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.
Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.
Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it).
"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.
Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?
In the meantime, Galileo is nearing its end. On September 21st, around 3 p.m. EST, Galileo will be vaporized as it plummets into Jupiter's atmosphere, sending back whatever data it still can. This planned destruction is the answer to an intriguing ethical dilemma.
In 1996, Galileo conducted the first of eight close flybys of Europa, producing breathtaking pictures of its surface, which suggested that the moon has an immense ocean hidden beneath its frozen crust. These images have led to vociferous scientific debate about the prospects for life there; as a result, NASA officials decided that it was necessary to avoid the possibility of seeding Europa with alien life-forms.I had never really given thought to the idea that one of our space probes could "infect" another planet with our "alien" life-forms, though it does make perfect sense. Reaction to the decision among those who worked on Galileo is mixed, most recognizing the rationale, but not wanting to let go anyway (understandable, I guess)...
For more on the pyramids, check out this paper by Marcell Graeff. The information he referenced that I used in this article came primarily from Kurt Mendelssohn's book The Riddle of the Pyramids.
Update 9.25.03 - Steven Den Beste has posted an excellent piece on the Galileo mission and more...
Posted by Mark on September 08, 2003 at 11:06 PM .: link :.
Sunday, May 25, 2003
Security & Technology
The other day, I was looking around for some new information on Quicksilver (Neal Stephenson's new novel, a follow-up to Cryptonomicon) and I came across Stephenson's web page. I like everything about that page, from the low-tech simplicity of its design to the pleading tone of the subject matter (the "continuous partial attention" bit always gets me). At one point, he gives a summary of a talk he gave in Toronto a few years ago:
Basically I think that security measures of a purely technological nature, such as guns and crypto, are of real value, but that the great bulk of our security, at least in modern industrialized nations, derives from intangible factors having to do with the social fabric, which are poorly understood by just about everyone. If that is true, then those who wish to use the Internet as a tool for enhancing security, freedom, and other good things might wish to turn their efforts away from purely technical fixes and try to develop some understanding of just what the social fabric is, how it works, and how the Internet could enhance it. However this may conflict with the (absolutely reasonable and understandable) desire for privacy.And that quote got me to thinking about technology and security, and how technology never really replaces human beings; it just makes certain tasks easier, quicker, and more efficient. There was a lot of talk about this sort of thing around the early 90s, when certain security experts were promoting the use of strong cryptography and digital agents that would choose what products we would buy and spend our money for us.
As it turns out, most of those security experts seem to be changing their mind. There are several reasons for this, chief among them fallibility and, quite frankly, a lack of demand. It is impossible to build an infallible system (at least, it's impossible to recognize that you have built such a system), but even if you had accomplished such a feat, what good would it be? A perfectly secure system is also a perfectly useless system. Besides that, you have human ignorance to contend with. How many of you actually encrypt your email? It sounds odd, but most people don't even notice the little yellow lock that comes up in their browser when they are using a secure site.
Applying this to our military, there are some who advocate technology (specifically airpower) as a replacement for the grunt. The recent war in Iraq stands in stark contrast to these arguments, despite the fact that the civilian planners overruled the military's request for additional ground forces. In fact, Rumsfeld and his civilian advisors had wanted to send significantly fewer ground forces, because they believed that airpower could do virtually everything by itself. The only reason there were as many as there were was because General Franks fought long and hard for increased ground forces (being a good soldier, you never heard him complain, but I suspect there will come a time when you hear about this sort of thing in his memoirs).
None of which is to say that airpower or technology is not necessary, nor do I think that ground forces alone can win a modern war. The major lesson of this war is that we need to have balanced forces in order to respond with flexibility and depth to the varied and changing threats our country faces. Technology plays a large part in this, as it makes our forces more effective and more likely to succeed. But, to paraphrase a common argument, we need to keep in mind that weapons don't fight wars, soldiers do. While the technology we used provided us with a great deal of security, it's also true that the social fabric of our armed forces was undeniably important in the victory.
One thing Stephenson points to is an excerpt from a Sherlock Holmes novel in which Holmes argues:
...the lowest and vilest alleys in London do not present a more dreadful record of sin than does the smiling and beautiful country-side...The pressure of public opinion can do in the town what the law cannot accomplish...But look at these lonely houses, each in its own fields, filled for the most part with poor ignorant folk who know little of the law. Think of the deeds of hellish cruelty, the hidden wickedness which may go on, year in, year out, in such places, and none the wiser.Once again, the war in Iraq provides us with a great example. Embedding reporters in our units was a controversial move, and there are several reasons the decision could have been made. One reason may very well have been that having reporters around while we fought the war may have made our troops behave better than they would have otherwise. So when we watch the reports on TV, all we see are the professional, honorable soldiers who bravely fought an enemy which was fighting dirty (because embedding reporters revealed that as well).
Communications technology made embedding reporters possible, but it was the complex social interactions that really made it work (well, to our benefit at least). We don't derive security straight from technology, we use it to bolster our already existing social constructs, and the further our technology progresses, the easier and more efficient security becomes.
Update 6.6.03 - Tacitus discusses some similar issues...
Posted by Mark on May 25, 2003 at 02:03 PM .: link :.
Sunday, April 06, 2003
Warp Drive Underwater by Steven Ashley : A long time ago, I wrote about Supercavitation here, but apparently missed this article, which covers the subject much more thoroughly. It focuses mostly on the military applications of this technology (though it is applicable to ocean farming and underwater exploration) and it contains a lot of detail on the most famous example of the technology, Russia's VA-111 Shkval (Squall) rocket-torpedo. Some of the details are speculative, but they give a good explanation of the technology as well as some of the main applications, which include high-speed torpedoes and underwater machine-guns firing supercavitating bullets to help clear mines. Underwater mines are a serious nuisance, and an application such as the US RAMICS program would be a huge help... [via Punchstack]
Posted by Mark on April 06, 2003 at 07:13 PM .: link :.
Monday, December 17, 2001
New Medium, Same Complaints
DVD Menu Design: The Failures of Web Design Recreated Yet Again by Dr. Donald A. Norman (of Nielsen Norman Group fame) : The first time I saw this, I didn't even realize that it wasn't written by Jakob Nielsen. I guess they're partners for a reason - Norman writes much the same way that Nielsen does, and with the same interface philosophy. This time they're applying the same old boring usability guidelines to DVDs. But just because they are the same doesn't mean they are useless - DVD menus are getting to be ridiculously and unnecessarily complex. There is something to be said for the artistic merit of the menu scheme, but most of the time it ends up being obnoxious (especially upon repeated viewings of the film). It's surprising that most DVDs haven't learned from the mistakes of other media. In fact, I'm going to take this opportunity to bitch about DVDs - their interfaces and their content.
Posted by Mark on December 17, 2001 at 02:39 PM .: link :.
Tuesday, October 09, 2001
The Fifty Nine Story Crisis
In 1978, William J. LeMessurier, one of the nation's leading structural engineers, received a phone call from an engineering student in New Jersey. The young man was tasked with writing a paper about the unique design of the Citicorp tower in New York. The building's dramatic design was necessitated by the placement of a church. Rather than tear down the church, the designers, Hugh Stubbins and Bill LeMessurier, set their fifty-nine-story tower on four massive, nine-story-high stilts, and positioned them at the center of each side rather than at each corner. This daring scheme allowed the designers to cantilever the building's four corners, allowing room for the church beneath the northwest side.
Thanks to the prodding of the student (whose name was lost in the swirl of subsequent events), LeMessurier discovered a subtle conceptual error in the design of the building's wind braces; they were unusually sensitive to certain kinds of winds known as quartering winds. This alone wasn't cause for worry, as the wind braces would absorb the extra load under normal circumstances. But the circumstances were not normal. There had been a crucial change during their manufacture: the braces were fastened together with bolts instead of welds (welds being considered stronger than necessary and overly expensive), and the contractors had interpreted the New York building code in such a way as to exempt many of the tower's diagonal braces from load-bearing calculations, so they had used far too few bolts. This multiplied the strain produced by quartering winds. Statistically, a storm severe enough to tear the joint apart could be expected once every sixteen years (what meteorologists call a sixteen-year storm). This was alarmingly frequent. To further complicate matters, hurricane season was fast approaching.
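It's worth pausing on what "once every sixteen years" means cumulatively. If such a storm has a 1-in-16 chance of striking in any given year (treating years as independent, which is a simplification), the odds of seeing at least one over the life of a building are sobering. A quick back-of-the-envelope calculation; the time horizons here are my own, purely for illustration:

```python
# Probability of at least one "sixteen-year storm" over various horizons,
# assuming independent years with an annual probability of 1/16.
p_annual = 1 / 16

for years in (1, 5, 16, 50):
    p_at_least_one = 1 - (1 - p_annual) ** years
    print(f"{years:>2} years: {p_at_least_one:.1%}")
```

Even over the sixteen-year window itself the chance is nearly two in three, which makes LeMessurier's urgency easy to understand.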
The potential for a complete catastrophic failure was there, and because the building was located in Manhattan, the danger applied to nearly the entire city. The fall of the Citicorp building would likely cause a domino effect, wreaking devastation across New York.
The story of this oversight, though amazing, is dwarfed by the series of events that led to the building's eventual structural integrity. To avert disaster, LeMessurier quickly and bravely blew the whistle - on himself. LeMessurier and other experts immediately drew up a plan in which workers would reinforce the joints by welding heavy steel plates over them.
Astonishingly, just after Citicorp issued a bland and uninformative press release, all of the major newspapers in New York went on strike. This fortuitous turn of events allowed Citicorp to save face and avoid any potential embarrassment. Construction began immediately, with builders and welders working from 5 p.m. until 4 a.m. to apply the steel "band-aids" to the ailing joints. They built plywood boxes around the joints, so as not to disturb the tenants, who remained largely oblivious to the seriousness of the problem.
Instead of lawsuits and public panic, the Citicorp crisis was met with efficient teamwork and a swift solution. In the end, LeMessurier's reputation was enhanced for his courageous honesty, and the story of Citicorp's building is now a textbook example of how to respond to a high-profile, potentially disastrous problem.
Most of this information came from a New Yorker article by Joe Morgenstern (published May 29, 1995). It's a fascinating story, and I found myself thinking about it during the tragedies of September 11. What if those towers had toppled over in Manhattan? Fortunately, the WTC towers were extremely well designed - they didn't even noticeably rock when the planes hit - and when they did come down, they collapsed in on themselves. They would still be standing today, too, if it weren't for the intense heat that weakened the steel supports.
Posted by Mark on October 09, 2001 at 08:04 AM .: link :.
Thursday, September 27, 2001
Do minds play dice?
Unpredictability may be built into our brains. Neurophysiologists have found that clusters of nerve cells respond to the same stimulus differently each time, as randomly as heads or tails. The implications of this are far-reaching, but I can't say I'm all that surprised. It makes evolutionary sense, in that you can evade (or even launch) attacks better by jumping from side to side. It makes sociological sense, in that a person's environment and upbringing do not necessarily dictate how they will act in the future (the most glaring examples are criminals; surely, their childhood must have been traumatic in order for them to commit such heinous acts). It even makes sense creatively, in that "randomness results in new kinds of behaviour and combinations of ideas, which are essential to the process of discovery".
Posted by Mark on September 27, 2001 at 06:56 PM .: link :.
Friday, June 22, 2001
Out of This World
Scientific American's Steve Mirsky shows a sense of humor in his story about the drop-off in UFO reports, giving several flippant explanations for the lack of sightings. Some claim that the aliens have completed their survey of Earth, but Mirsky believes the idea that they could complete their survey of Earth in a mere 50 years is both ludicrous and insulting and reasons that they must have run out of their alien government funding. My favourite explanation:
The aliens have finally perfected their cloaking technology. After all, evidence of absence is not absence of evidence (which is, of course, not evidence of absence). Just because we no longer see the aliens doesn't mean they're not there. Actually, they are there; the skies are lousy with them, they're coco-butting one another's bald, veined, throbbing, giant heads over the best orbits. But until they drop the cloak because they've got some beaming to do, we won't see them.I love the description "bald, veined, throbbing, giant heads". [via Follow Me Here]
Posted by Mark on June 22, 2001 at 01:16 PM .: link :.
Monday, May 21, 2001
Bending Time and Space with Light
Time twister: New Scientist reports that a professor of theoretical physics, Ronald Mallett, thinks he has found a practical way to make a time machine. Unlike other "time travel" solutions, such as wormholes, Mallett's solution relies heavily on light, a much more down-to-earth ingredient when compared to the "negative energy" matter used to open wormholes. Even though light doesn't have mass, it does have the quirky ability to bend space-time. Last year, Mallett published a paper describing how a circulating beam of laser light would create a vortex in space within its circle (Physics Letters A, vol 269, p 214).
To twist time into a loop, Mallett worked out that he would have to add a second light beam, circulating in the opposite direction. Then if you increase the intensity of the light enough, space and time swap roles: inside the circulating light beam, time runs round and round, while what to an outsider looks like time becomes like an ordinary dimension of space.The energy needed to twist time into a loop is enormous, but Mallett saw that the effect of circulating light depends on its velocity: the slower the light, the stronger the distortion in space-time. Light gains inertia as it is slowed down, so "Increasing its inertia increases its energy, and this increases the effect," Mallett says. There is still a lot of work to do to make this process a reality, and it probably won't happen for some "time", but the concept of plausible time travel in our time is intriguing, if only because of the moral and paradoxical issues it raises. The most famous paradox, of course, is going back in time to kill your grandparents, effectively negating your very own existence - but then you wouldn't be able to go back in time, would you? My favourite solution to said paradoxes is the Terminator or Bill and Ted version of time travel in which what you've done in the past has already influenced your present (and future). [via ArsTechnica]
Posted by Mark on May 21, 2001 at 09:35 AM .: link :.
Tuesday, May 01, 2001
The Earthquake Rose
Earthquakes are generally considered to be nasty, rather destructive events, but after a recent earthquake in Seattle, someone noticed some interesting patterns produced by a sand tracing pendulum (or Foucault Pendulum). The entire pattern resembles an eye (some say Poseidon's eye, for the god of the sea is also the god of earthquakes), but the pupil of said eye, the part of the pattern created by the earthquake, looks very much like a rose (and thus, it is called an Earthquake Rose). It is really quite pretty, and it's fascinating that "such a massive and very destructive release of energy can also contain such delicate artistry within its chaos." [found somewhere I don't remember the name of].
Posted by Mark on May 01, 2001 at 12:22 PM .: link :.
Monday, April 23, 2001
"Bionic Tower": A 300-story supertall building originally proposed for Hong Kong is now being considered by China's leaders for Shanghai. Its European designers describe it as a "vertical city". It would house 100,000 people and contain hotels, offices, cinemas and hospitals, effectively making it possible (not necessarily preferable) to live an entire life in one building. "Dwarfing Kuala Lumpur's twin Petronas Towers, the world's tallest buildings at 1,483ft high, it would be set in a gigantic, wheel-shaped base incorporating shopping malls and car parks." The designers have devised a root-like system of foundations that would descend 656ft, surrounded by an artificial lake to absorb vibrations caused by any earth tremors. Amazing stuff; it reminds me of the gigantic cities of The Caves of Steel, where cities spanned hundreds of miles and were ultimately self-contained (which caused a nasty fear of open spaces). Such an undertaking is an engineering nightmare. If attempted, it could quite possibly fail miserably - there are so many factors and pitfalls to be avoided that there are bound to be some unforeseen consequences... [via /.]
If this venture is successful, however, it seems like it would be the world's first successful arcology. From the Arcologies egroup discussion:
Arcology is Paolo Soleri's concept of cities which embody the fusion of architecture with ecology. The arcology concept proposes a highly integrated and compact three-dimensional urban form that is the opposite of urban sprawl with its inherently wasteful consumption of land, energy, time and human resources. An arcology would need about two percent as much land as a typical city of similar population. Arcology eliminates the automobile from inside the city and reserves it for use outside the city. Walking would be the main form of transportation inside an arcology. The miniaturization of the city enables radical conservation of land, energy and resources. Arcology would rely as much as possible on the sun, the wind and other renewable energy so as to reduce pollution and dependence on fossil fuels. Arcology needs less energy per capita thus making recycling and the use of solar energy more feasible than in present cities.
Posted by Mark on April 23, 2001 at 09:42 AM .: link :.
Tuesday, April 17, 2001
Houston, we have a blue screen of death
Commander William Shepherd kept a mission log during the initial 136-day shift aboard the International Space Station. The log is fun reading, and you can't help but sympathize with the frustrations the crew was constantly facing. As the Laboratorium notes, many of the problems were computer-related, and its fairly comprehensive list of them is funny as hell.
While many of those computer systems did have problems, it's important to note just how well NASA's aerospace applications work:
This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.Which is really how it should be for something that pilots a space shuttle, but then, writing software for such a focused set of criteria makes things somewhat easier to implement:
Admittedly they have a lot of advantages over the rest of the software world. They have a single product: one program that flies one spaceship. They understand their software intimately, and they get more familiar with it all the time. The group has one customer, a smart one. And money is not the critical constraint: the group's $35 million per year budget is a trivial slice of the NASA pie, but on a dollars-per-line basis, it makes the group among the nation's most expensive software organizations.The shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government's Software Engineering Institute (SEI), a measure of the sophistication and reliability of the way they do their work. [Thanks to the Laboratorium and norton for all the info]
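Those quoted figures are easy to turn into a defect-density comparison, which makes the gap even more vivid. A quick sketch using only the article's numbers (errors per thousand lines of code, or KLOC, is the usual unit):

```python
# Defect density comparison using the figures quoted in the article:
# 420,000 lines, one error per recent version of the shuttle software,
# versus the article's estimate of 5,000 errors for commercial code
# of equivalent complexity.
loc = 420_000
kloc = loc / 1000

shuttle_errors = 1        # last three versions: one error each
commercial_errors = 5_000 # article's figure for equivalent commercial code

shuttle_density = shuttle_errors / kloc
commercial_density = commercial_errors / kloc

print(f"shuttle:    {shuttle_density:.4f} errors/KLOC")
print(f"commercial: {commercial_density:.2f} errors/KLOC")
print(f"ratio:      {commercial_density / shuttle_density:.0f}x")
```

That's a difference of more than three orders of magnitude, which puts the $35 million per year in perspective.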
Posted by Mark on April 17, 2001 at 09:58 AM .: link :.
Monday, April 02, 2001
The Science Behind
The Science Behind the X-Files is quite well done. Several episodes are broken down into their various scientific elements which are further explained with referenced resources. Fun, informative, and geeky. Thanks to Nothing for pointing that site out. Nothing has a circuitry themed design similar to (and much better than) one of my first designs, except mine had NAND and NOR gates.
The Science Behind Merla's Cosmatron is also interesting. Remember Voltron? Who knew they were teaching me about sub-atomic particles... Those who examine the fake webcam pictures carefully have observed a Voltron-like object in the background...
Posted by Mark on April 02, 2001 at 07:44 PM .: link :.
Wednesday, March 07, 2001
Faster than a Speeding Bullet
Supercavitation essentially creates a gas bubble around all but the very nose of a projectile in order to virtually eliminate water drag and achieve high speeds (possibly breaking the sound barrier). The technology is real, and the applications range from peaceful ocean farming and exploration of Jupiter's moon Europa to supercavitating weaponry like torpedoes and bullets. However, there appear to be plenty of obstacles (like steering, constantly changing pressures, etc.) standing in the way. "Mastery of supercavitation could turn the quiet chess game of submarine warfare we know today into a mirror image of the hyper-kinetic world of aerial combat." The cinematic possibilities alone make this phenomenon intriguing. Imagine Top Gun under water. Take note, Hollywood. This could make the basis for a great movie. [thanks to F2 and metascene]
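The "virtually eliminate water drag" claim makes sense once you remember that drag scales with fluid density: a projectile riding inside a gas cavity is pushing through vapor, not water. A rough sketch of the scale involved; the density figures, and the assumption that the drag coefficient, frontal area, and velocity stay fixed, are my own simplifications:

```python
# Rough scale of the drag reduction from supercavitation. Drag force is
# F = 0.5 * rho * Cd * A * v^2, so with Cd, A, and v held equal, the
# reduction is just the ratio of fluid densities.
rho_water = 1000.0   # kg/m^3, liquid water
rho_vapor = 1.2      # kg/m^3, roughly air-like gas (assumed)

def drag(rho, cd=0.3, area=0.01, v=100.0):
    """Drag force in newtons for a given fluid density."""
    return 0.5 * rho * cd * area * v**2

reduction = drag(rho_water) / drag(rho_vapor)
print(f"drag is roughly {reduction:.0f}x lower inside the cavity")
```

A factor of several hundred, which is why the remaining engineering problems are steering and pressure management rather than raw thrust.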
Posted by Mark on March 07, 2001 at 09:05 AM .: link :.
Friday, March 02, 2001
It's nice to see that someone writes lab reports the way I used to. I especially liked his conclusions: "Going into physics was the biggest mistake of my life. I should've declared CS. I still wouldn't have any women, but at least I'd be rolling in cash."
Posted by Mark on March 02, 2001 at 11:10 AM .: link :.
Tuesday, January 30, 2001
Ginger for Sale?
It seems that Amazon is now taking orders for IT, otherwise known as Ginger. Of course, nobody knows yet what it is, what it does, or how much it will cost, but that apparently doesn't stop people from ordering it. The mystery thickens.
Posted by Mark on January 30, 2001 at 01:25 PM .: link :.
Monday, January 29, 2001
Read My Mind
Mind reading. It seems fantastical, but it may be true. A team of Italian neurophysiologists has discovered so-called "mirror" neurons in the brain, which seem to fire in sympathy, reflecting or perhaps simulating the actions of other people. For instance, if I were to slap myself in the face, a certain set of neurons in my brain would fire in order to make this act of stupidity happen. And if you happen to witness my moronic act, the very same set of neurons will fire in your brain (though you won't be slapping yourself silly). This discovery could go a long way toward explaining things like why people are so damn imitative, how we developed language, and why people can instantly understand how you are feeling just by observing your actions. Some people are referring to this as "mind reading", but it seems more like an advanced simulation to me. Basically, when I observe someone doing something, my brain instinctively simulates the action (by firing the appropriate neurons) and makes conclusions based on what happens. Though it may not be mind reading, it is certainly a big step forward for psychologists.
An interesting side note regarding mind reading. Some people believe we have an innate but repressed form of mind reading that sometimes surfaces in the form of "intuition" or even physical illness when faced with danger. The human brain only operates at somewhere around 10-20% efficiency, with occasional jumps to 25-30% (which is usually referred to as intuition or revelation and is associated with a possible decline in physical health). For instance, take this entry found in Wierd but True:
"train wrecks: in train wrecks the number of passengers in damaged cars is less than average by so much and so often that it cannot be a chance occurrence. somehow we know not to get on them. (work done by william cox and reported by lyall watson)"I've heard of similar statistics referring to airplanes as well. Many planes that crash are only half full; people who didn't get on the plane just had a "bad feeling" about it or actually got sick and were unable to fly. What are our brains really capable of?
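Cox's claim is, at bottom, a statistical one, and the shape of the test is easy to sketch: given a baseline occupancy rate, how unlikely is the lower occupancy actually observed in the damaged cars? Here's a minimal sketch; every number in it is hypothetical, purely to illustrate the test (I haven't seen Cox's actual data):

```python
# One-sided binomial test: is occupancy in the damaged cars significantly
# below the baseline rate? All numbers below are hypothetical.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

baseline_rate = 0.60   # hypothetical: seats are normally 60% full
seats = 100            # hypothetical: seats in the damaged cars
occupied = 45          # hypothetical: passengers observed in those cars

# Chance of seeing 45 or fewer occupants by luck alone
p_value = binom_cdf(occupied, seats, baseline_rate)
print(f"p-value = {p_value:.4f}")
```

With these made-up numbers, the one-sided p-value comes out well below 1%, which is the kind of result Cox claimed; the hard part, of course, is ruling out mundane explanations before reaching for precognition.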
Posted by Mark on January 29, 2001 at 02:58 PM .: link :.
Thursday, January 25, 2001
Faith in Mathematics
Why I Like Math by Matt Stone. Nice story of a man's search for meaning and finding it through mathematics ("I became aware of an underlying superstructure that tied all my math knowledge together."). Why is it that people think religion is only comforting? Comfort is one aspect of religion, yes, but it is not everything. In many cases, I would even go so far as to say that religion is no more comforting than any other system of beliefs (be it scientific, atheistic, agnostic, or, in this case, mathematical). My naive optimism has more to do with my happiness than my religion (then again, I suppose religion has influenced my optimism). In the end, I don't think religion is as important as most people think. It plays a small part in many aspects of life, but it does not (at least, it should not) dominate everything. [via metascene]
Posted by Mark on January 25, 2001 at 09:25 AM .: link :.
Tuesday, January 09, 2001
What is the colour of five? What does blue taste like? Believe it or not, some people can answer these questions. These people have a rare variety of perception called synesthesia. Synesthesia literally means joined sensations, a condition that causes certain sensations to "leak" into one another. It's much deeper than a simple association or metaphor; synesthetes don't think about a sound when they see a colour, they actually hear the sound! This raises all sorts of questions regarding our view of the world and reality. Do we all have an innate form of synesthesia, possibly repressed? Who knows, but the more I think about this condition, the less I'm surprised (and the more I realize how little we know about ourselves). Yet another bizarre scientific discovery...
Posted by Mark on January 09, 2001 at 04:44 PM .: link :.
Tuesday, December 26, 2000
Dr. Humanity or How I Learned to Stop Worrying and Love the Genome
The Human Genome in Human Context: Scientists recently announced that they had virtually completed the task of mapping the human genome. The implications of such an event vary. Some believe it will usher in a new era of Genetic Engineering, complete with a multitude of ethical fears, such as the insurability of people with genetically identifiable risks for disease or the creation of an entirely new form of Humanity. The author of the article believes that we really don't have much to worry about right now. While we may have mapped the genome, we do not yet know how to apply it. Some quotes from the article:
"Enhancements in human abilities that may come through genetic engineering will in most cases be negligible compared to those already achieved, or achievable in the future, through tools."
"The problem is compounded by the fact that the relation of genes to traits is not one-to-one. Some traits are influenced by many genes, and some genes influence many traits. The law of unintended consequences is therefore bound to operate with a vengeance."
"...there is already quite conclusive evidence that human behavior, though strongly conditioned by genetics, is not completely determined by it."
All in all, a fascinating article and a refreshing change from the typical Horrors of Genetics diatribe. I don't think we'll be heading for a world like the one presented in the film Gattaca any time soon...
Posted by Mark on December 26, 2000 at 03:10 PM .: link :.
Thursday, December 21, 2000
Weird but True
This site contains various (surprisingly insightful and referenced) blurbs about strange phenomena that occur. What an odd world we live in. It's amazing how little we know about it. [found in the bowels of kottke]
Posted by Mark on December 21, 2000 at 11:33 PM .: link :.
Friday, December 15, 2000
The Designer Universe
Do we live in a "designer universe"? The laws of nature seem fine-tuned for conscious life to emerge; if the fundamental constants of physics were off by only a hair, the universe would have been a lifeless dud (no stars, no stable elements, etc...). This reminds me of one of Thomas Aquinas' 5 Ways (order in the universe implies an intelligent creator that we call God), and the finely tuned universe seems to support some sort of Cosmic Designer. However, the Cosmic Designer Hypothesis is only one way of explaining the improbable fine-tuning of nature's laws (and it is flawed to begin with). There's the "Big Fluke Hypothesis", which doesn't provide much of an explanation, and then there is the "Many Universes Hypothesis", which claims that there are, surprise, many universes (perhaps an infinite number), the idea being that we live in the one lucky universe where everything came together. All the theories have their own advantages and disadvantages, and it's quite fun to ponder why our world is the way it is...
Posted by Mark on December 15, 2000 at 01:11 PM .: link :.
Thursday, December 07, 2000
Taking Ballistics by Storm: An electronic gun with no mechanical parts that could theoretically fire a million rounds per minute. It was invented by former grocery wholesaler Mike O'Dwyer. I can't believe this guy, who has no formal education in ballistics, didn't kill himself while inventing this thing. [via usr/bin/girl]
Posted by Mark on December 07, 2000 at 05:14 PM .: link :.
Tuesday, December 05, 2000
Big Brother is Watching, Listening, Reading...
This one goes out to all the paranoid British visitors of my site: Apparently there is a secret plan to spy on all British phone calls, as well as emails and internet connections. Very scary.
Posted by Mark on December 05, 2000 at 03:56 PM .: link :.
Sunday, November 19, 2000
Just in time for the Holidays
Although their utility is unclear, just imagine what that guy who figured out the healing potential of testicles could do with this. Be afraid. Be very afraid.
Posted by Mark on November 19, 2000 at 10:26 PM .: link :.
Friday, November 17, 2000
None of them knew they were robots
Ok, we've already established that scientists are clever. We get it. Now, let's ponder how on earth they figure some of these things out. Scientists have recently discovered that they can help stroke victims recover more quickly by implanting testicle cells into patients' brains. What?! I want to know what possessed scientists to induce strokes in rats and then put testicle cells in their brains.
In mathematics news, there are signs that the Riemann hypothesis (probably the most famous unsolved problem in mathematics) is close to being proven. The Riemann hypothesis has to do with prime numbers and their distribution (it is speculated that their distribution is chaotic). Apparently, those clever scientists I keep marvelling at have found a link between the Riemann hypothesis and the physical world. If this connection proves to be true, it would be a huge boost to our understanding of the universe (there are tons of proofs in mathematics that start: "Assuming the Riemann hypothesis is true...").
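For the curious, the hypothesis itself can be stated fairly compactly. This is my paraphrase of the standard textbook formulation, not anything from the linked article:

```latex
% The Riemann zeta function, defined for Re(s) > 1 and extended
% to the rest of the complex plane by analytic continuation:
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}

% The hypothesis: every non-trivial zero lies on the "critical line":
\zeta(s) = 0 \;\text{ and }\; 0 < \operatorname{Re}(s) < 1
  \;\implies\; \operatorname{Re}(s) = \tfrac{1}{2}

% Why primes care: if the hypothesis is true, the prime-counting
% function \pi(x) stays remarkably close to the logarithmic
% integral \operatorname{Li}(x):
\pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right)
```

In other words, the hypothesis pins down exactly how "random" the primes are allowed to be, which is why so many other proofs lean on it.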
Posted by Mark on November 17, 2000 at 02:06 PM .: link :.
Tuesday, October 17, 2000
Nirvana on the Freeway
Another interesting article concerning traffic congestion suggests that certain traffic densities "transform the whole mess into a state of crystalline harmony". However, this state is extremely sensitive, which probably explains why I have never witnessed said "crystalline harmony" in traffic.
I know this whole traffic jam situation seems hopeless, but this guy claims there is hope and he goes into fairly deep detail about the whole situation. This article is excellent, and I even tried some of his "solutions" and they appeared to get me to my destination quicker than usual, though I really fail to see how my driving can affect the people in front of me (though I can see that the people behind me are in a state of uniform movement, which is pretty damn cool).
Some things I noticed people doing in their cars while waiting at the Tollbooths of the PA Turnpike:
Posted by Mark on October 17, 2000 at 02:00 PM .: link :.
Monday, October 16, 2000
Since I have been spending the better part of my recent life stuck in traffic, I've become intrigued with the ebb and flow of stop and go. "Scientists said they are closer to comprehending the birth of the universe than the daily tie-ups along Interstate 66." Joy. It doesn't help that the road system is not being expanded to handle the increased volume (i.e. more cars, no new roads). Then again, some say the problem is congestion, not lack of roads (more lanes means more congestion)... Not to mention that roads, specifically in the northeast, are in a constant state of (dis)repair due to increasing volume and the extremes of weather. More joy.
Posted by Mark on October 16, 2000 at 09:51 AM .: link :.
Friday, October 13, 2000
Most people are aware that scientists are bright guys. Very intelligent, they are. But faster-than-light light? This is insanity (or maybe it's genius)! Apparently scientists have figured out a way to have light exit a box before it even enters. Mindbending shite. I need a drink.
BTW, Amazon is back to its old bloated self. Damn.
Posted by Mark on October 13, 2000 at 08:44 AM .: link :.
Where am I?
This page contains entries posted to the Kaedrin Weblog in the Science & Technology Category.
Copyright © 1999 - 2012 by Mark Ciocco.