
Culture
Sunday, December 15, 2013

The 2013 Egg Nog Tasting
A tradition born by accident, my family's Egg Nog tasting happens every Thanksgiving. One Thanksgiving many moons ago, thanks to poor coordination, everyone brought one or two Egg Nogs, and thus we ended up with, like, 14 different types. I'm not actually positive what year this really went into overdrive, but ever since that fateful year, we've actually planned to have that many Egg Nogs, and have even gone so far as to orchestrate a double-blind tasting in order to determine the Best Egg Nog (the "worst" is usually a pretty easy and uncontroversial decision that does not require any real debate). I mean, we're not scientists here or anything, but this is pretty rigorous for a family gathering. I could have sworn I did a better job recapping each year's proceedings, but only a few previous tastings have been chronicled: [2012 | 2010 | 2008].

One thing we've noticed is that the same Egg Nogs tend to show up every year, and we've got a few that consistently win (notably local mainstays Wawa and Swiss Farms). Last year we made a rule that the previous year's winner (and "winner" of worst nog) could not return. This year we made a concerted effort to seek out completely new and obscure Egg Nogs. I was actually shocked at how well we did in this mission, though of course there were a couple repeats. So let's do this, the Egg Nogs of 2013:
2013 Egg Nogs
For posterity, the Egg Nogs pictured here are (from left to right):
  • Turkey Hill Egg Nog
  • America's Choice Holiday Favorite Egg Nog
  • Bolthouse Farms Limited Edition Holiday Nog (Low Fat)
  • Promised Land Old Fashioned Egg Nog
  • Trader Joe's Egg Nog
  • Trickling Springs Creamery Farm Friend Fresh Egg Nog
  • Califia Farms Almondmilk Holiday Nog
  • Lehigh Valley Holiday Eggnog
  • Borden Eggnog
  • Silk Seasonal Nog
The only returning contenders were the Turkey Hill, which has pretty much always shown up (but always places somewhere in the middle of the pack), and the Silk Seasonal Nog (which has won "worst" in the past). The Borden was arguably a returning contender as well, though it's now packaged in a resealable container (Borden was always famous for being canned) and while they claim the recipe is the same, this stuff was nothing like the Borden of years past (which was also a middle of the pack performer). Indeed, the Borden was nearly toxic and came out a weirdly bright, almost glowing color. Gross.

But as bad as it was, Borden was still at least marginally identifiable as Egg Nog. One thing I've noticed about the competition for worst egg nog is that it is dominated by entries that aren't actually "egg" nog. They're always just "Holiday Nog" or "Seasonal Nog" or "Coconut Nog" or some such lie. These really aren't Egg Nogs, but they've got some nutmeg and they're trying to capitalize on the season. I guess that's fine for big Soymilk fans, but when you have these right next to real Egg Nog, that just makes them seem all the worse. This year's competition came down to three: the Bolthouse Farms Low Fat Holiday Nog, which was packaged so deceptively that we didn't realize what it was until we nearly gagged on it; Silk, which was its normal self; and Califia Farms Almondmilk Holiday Nog, the real revolution in bad flavor. It was so bad, I think it somehow hurt my eyeballs. The decision was unanimous.

The competition for best was a little better, though I do think the champions of years past (Wawa, Swiss Farms, Upstate Farms) would have trounced all of this year's competitors. Indeed, the normally middle of the pack Turkey Hill was a clear favorite heading into the blind tasting, which only featured three Egg Nogs this year: Turkey Hill, America's Choice (whose box sez "fa-la-la-la-yum", which became its unofficial name), and Promised Land (whose label proclaims "From the finest Jersey Cows"). It was close, but Promised Land came out the victor.
It was a fine year, but I think we need to have something like a Tournament of Champions next year, and bring back all the best Egg Nogs. I'm also toying with a rule that we should not accept "holiday nogs" that are not actually Egg Nog. Of course, that would limit options for the "worst" award, though I suppose "Light" egg nogs (or Borden!) could qualify. But maybe instead of worst, we bring back "flavored" egg nogs (which were banned several years ago). We'll have to wait until next year. But me, I'm going to hit up Wawa sometime this week and get some real Egg Nog...
Posted by Mark on December 15, 2013 at 08:57 PM .: Comments (3) | link :.



Sunday, November 24, 2013

Men, Women, and Chain Saws
In 1980, Gene Siskel and Roger Ebert hosted a special edition of their Sneak Previews PBS show, and used the opportunity to decry an emerging "Women in Danger" genre of horror thrillers.
It's important to note that this was only the opening salvo of exploitation horror. New technology, changes in distribution, the continuing emergence of independent filmmaking, and a host of other factors led to a glut of popular yet despised horror films. The dominant sub-genre of these films was the Slasher film, but Siskel and Ebert were talking about this so early in the process that the much-maligned sub-genre hadn't even been named yet. There is something prescient about the two film critics putting this episode together when they did. The heyday of the slasher was only beginning and would last another three years before it even started to subside.

Indeed, it must have been more than a little odd to have been present while all of this was happening. I actually like slasher movies and have watched a lot of them during my annual Six Weeks of Halloween horror movie marathon, but even I would probably have had a different reaction back in 1980. Apparently one of the things that prompted Siskel and Ebert to dedicate a show to the subject was the behavior of the crowd during a screening of I Spit On Your Grave, as they shouted and cheered the rape sequences in the film. That has to be a disturbing way to watch a movie. But with time and perspective, things have changed a bit.

Enter Carol Clover, a Professor at UC Berkeley, who wrote several essays on horror films that have since been collected in the book Men, Women, and Chain Saws: Gender in the Modern Horror Film:
This book began in 1985 when a friend dared me to go see The Texas Chain Saw Massacre. I was familiar with the horror classics and with stylish or "quality" horror (Hitchcock, De Palma, and the like), but exploitation horror I had assiduously avoided. Seeing Texas was a jolting experience in more ways than one. (Page 19)
Alerted to the genre, she started to explore territory she had avoided, and "against all odds" she has "ended up something of a fan". She certainly doesn't go too easy on the genre, and in many ways, her critiques mirror Siskel and Ebert's, but perhaps with the perspective of time, she has also found value in these films, and she did so at a time when they were universally reviled and never given much of a thought. Her essay on slasher films first appeared in 1987 (just as the genre was in its final death throes) and was revised for this book in 1992, and it immediately changed the landscape. In this essay, Clover coins the term "Final Girl", and notes that even if audiences identify with or cheer on the killer early in the film, they always experience a reversal as the Final Girl fights back. Reading this now, it seems odd that anyone would be surprised that a male viewer could relate to a female protagonist, but this was apparently a surprising thing that people were still working through. As Erich Kuersten notes: "I wasn't afraid for girls, or of girls, I was afraid through girls."

Again, the fact that Clover finds value here does not mean she's blind to the issues with slasher films, but she also thinks it's worth discussing:
One is deeply reluctant to make progressive claims for a body of cinema as spectacularly nasty toward women as the slasher film is, but the fact is that the slasher does, in its own perverse way and for better or worse, constitute a visible adjustment in the terms of gender representation. (Page 64)
Clover's slasher essay shines a light on a reviled sub-genre, and is clearly the centerpiece of the book, but there are several other chapters, all filled with similarly insightful looks at various sub-genres of horror. In one, she tackles occult films, with a focus on possession films like The Exorcist, and contrasts with the slasher:
It is in comparison with the slasher film that the occult film (above all the possession film) comes into full focus. Both subgenres have as their business to reimagine gender. But where the slasher concerns itself, through the figure of the Final Girl, with the rezoning of the feminine into territories traditionally occupied by the masculine, the occult concerns itself, through the figure of the male-in-crisis, with a shift in the opposite direction: rezoning the masculine into territories traditionally occupied by the feminine. (Page 107)
I don't always buy into all of this, but then, I came of age when all these films were playing on cable. I grew up with strong Final Girls, so the notion that "strength" would ever be "gendered masculine" seems a little silly to me, but perhaps 30-40 years ago, that was not the case (and vice versa for the male-in-crisis movies). I probably never would have used the same terminology or articulated it in the same way, but I've clearly internalized these notions.

There is a chapter on Rape Revenge films, which I am actually not very well versed in (because I was reading this, I watched I Spit On Your Grave this year), but which makes a fair amount of sense. It's easy to see why these movies are controversial, especially something like I Spit, but Clover manages to find value in these films (one of which is the all-male Deliverance) and makes all sorts of clever observations about commonalities in the genre (in particular, there isn't just a male/female dichotomy in these films, but also a city/country or sophisticated/redneck component to the rape and revenge). Finally, there is a chapter on "The Eye of Horror", which spends a lot of time looking at perspective shots and "gazes."

It's a fascinating book, filled with interesting observations and a motivated perspective. There are certainly nits to pick (for instance, at one point, she claims that Werewolf stories are about a fear of being eaten by an animal, which I guess is there, but the real fear is becoming a werewolf yourself, losing control, being overwhelmed by your animal desires, etc... The enemy within, and all that...) and I don't always agree with what she's asserting, especially when she starts down the rabbit hole of Freudian analysis and some of the broader topics like "gazes" and "rape culture" and so on. I could quibble with some of her key films in each chapter (she perhaps overestimates The Texas Chainsaw Massacre 2 and its impact on the genre, though it's clearly a great example for Clover's thesis) and the notion of closely observing a few films and extrapolating that into an entire sub-genre will always cause some dissonance, but Clover clearly did her homework and has seen not only the famous horror movies, but also her fair share of obscure ones. Like the Bechdel Test, the perspective here is narrowed to gender, which, of course, isn't the only perspective to have while watching movies. Also like the Bechdel test*, there's this notion that you have to take individual examples of something and treat them as representative of a much broader trend. This doesn't make these analyses any less interesting though!

When you look at Siskel and Ebert's response to these films, then Clover's response (years later and with some unique perspectives), it's easy to see how much we ourselves inform our reactions to film. Siskel and Ebert saw only misogyny, which is not entirely incorrect, but Clover looked at the films differently and managed to find value. I think a lot of people would find both analyses absurd, and they wouldn't be entirely wrong about that either. People often complain that critics never represent the mainstream, perhaps because the mainstream never really concerns itself with context or perspective. They're looking to be entertained for a few hours on a Friday night, not discuss the reversal of gender politics or other such high-minded affairs. In the end, a book like Men, Women, and Chain Saws probably says just as much about Carol Clover as it does about the films themselves. You see what you want to see in movies, and while that can be interesting, that's not always the whole story.
To a remarkable extent, horror has come to seem to me not only the form that most obviously trades in the repressed, but itself the repressed of mainstream filmmaking. When I see an Oscar-winning film like The Accused or the artful Alien and its blockbuster sequel Aliens, or, more recently, Sleeping with the Enemy and Silence of the Lambs, and even Thelma and Louise, I cannot help thinking of all the low-budget, often harsh and awkward but sometimes deeply energetic films that preceded them by a decade or more - films that said it all, and in flatter terms, and on a shoestring. If mainstream film detains us with niceties of plot, character, motivation, cinematography, pacing, acting, and the like, low or exploitation horror operates at the bottom line, and in so doing reminds us that every movie has a bottom line, no matter how covert or mystified or sublimated it may be. (Page 20)
* Interestingly, horror movies tend to pass the Bechdel test at a much higher rate than most other genres (just shy of 70% pass the test, as compared to stuff like Westerns or Film Noir, where it's more like 25%). This says nothing about the quality of the films or their feminist properties, but it's an interesting note...
Posted by Mark on November 24, 2013 at 01:18 PM .: Comments (0) | link :.



Sunday, November 17, 2013

On The Bechdel Test
For the uninitiated, the Bechdel Test is meant to gauge the presence of female characters in film. In order to pass the test, a film must meet three requirements:
  1. It has to have at least two [named] women in it
  2. Who talk to each other
  3. About something besides a man
The test is named after Alison Bechdel, a cartoonist who formulated the rule as a setup to a punchline in a 1985 comic strip for Dykes to Watch Out For (the punchline: "Last movie I was able to see was Alien..."). There are many variants to the rules, but the one listed above seems to be the most common - it adds a requirement that the two female characters have to be "named" to avoid counting stuff like a female clerk giving a woman change or something (a reasonable addition). It has slowly but surely ingrained itself into the popular culture, especially on the internet in the past few years. Indeed, it's become so popular that it's now frequently used incorrectly!
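
Since the rules are purely mechanical, the whole test fits in a few lines of code. Here's a toy sketch in Python - the data model (a film reduced to a set of named women plus a list of conversations, each flagged for whether its topic is a man) is my own invention for illustration, not anything BechdelTest.com actually uses:

    from dataclasses import dataclass

    @dataclass
    class Conversation:
        participants: set    # names of the characters talking
        about_a_man: bool    # is the topic of conversation a man?

    def passes_bechdel(named_women, conversations):
        # Rule 1: at least two named women in the film
        if len(named_women) < 2:
            return False
        # Rules 2 and 3: at least one conversation between two named
        # women about something besides a man
        return any(len(c.participants & named_women) >= 2 and not c.about_a_man
                   for c in conversations)

    # The punchline example from the original strip: Alien passes,
    # because Ripley and Lambert talk to each other about the monster.
    alien = [Conversation({"Ripley", "Lambert"}, about_a_man=False)]
    print(passes_bechdel({"Ripley", "Lambert"}, alien))  # True

Note that the function returns a plain boolean, which is worth keeping in mind for the misunderstandings discussed below: a film either passes or it doesn't, and the result only becomes meaningful in aggregate.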

BechdelTest.com seems to be the best resource for this sort of thing, and the statistics are interesting. Out of 4570 movies, only 2555 (55.9%) pass the test. The trend does seem to be (very slowly) improving over time, but it's a pretty dismal portrait.

The Bechdel Test is far from perfect (more on that in a bit), but I do find it to be interesting for two reasons:
  • It's objective. Discussions of identity politics seem to angry up the blood, especially on the internets, so the removal of any subjectivity from the test is a good thing. These are facts here, not opinions.
  • It really does illustrate a certain type of gender imbalance in film. This is an important observation, if not the end-all-and-be-all of feminist criticism.
Alas, there are some rather severe limitations on this test:
  • It says nothing about the quality of the film in question. For instance, Citizen Kane and Casablanca fail the test. On the other hand, The Mortal Instruments: City of Bones (12% on Rotten Tomatoes) and The Smurfs 2 (14% on Rotten Tomatoes) pass.
  • It says nothing about how "feminist friendly" the film is. For instance: Showgirls passes the test, and while I don't have a specific reference for this next one, I'm positive that there are lesbian porn movies (made explicitly for the titillation of men) that would pass the test too.
This isn't mansplaining or patriarchy speaking, these are acknowledged limitations of the test. Of course, finding ironic counterexamples is missing the point. It's not whether a given movie passes or fails the test that matters; what matters is what you see when you look at the film industry as a whole.

This, however, is the biggest flaw of the test. It's a macro test applied at the micro scale. The test says nothing about an individual film's worth (feminist or not), but the test must be applied to individual films. This leads to a whole boatload of misunderstandings and misguided attempts to tarnish (or praise) a movie because it failed (or passed) the Bechdel test. BechdelTest.com is filled with objections to a given rating and debate about whether an individual film is feminist enough to pass and other such misunderstandings of the rules (for instance: something can't "barely" pass; it either does or it doesn't). This account of two students attempting to dominate their class by using the Bechdel Test to dismiss any film that didn't pass is another demonstration. "They labeled any film that didn't pass the test as unworthy of praise and sexist. ... I'm not exaggerating in that statement, the pair literally dismissed Citizen Kane altogether and praised Burlesque." (Of course, as the first commenter notes, both the account and the two students were applying the test incorrectly). Swedish movie theaters are instituting a new rating system that labels films that have passed (I'm not entirely clear on the implications here, but it's still kinda missing the point).

The list could go on and on, but severe limitations like this make it clear that the Bechdel Test has a limited application. Of course, there's nothing wrong with that and it does illustrate something about the industry, but let's stop applying it where it doesn't belong.

Some other assorted thoughts on the Bechdel Test:
  • One of the things that has always irked me about the test is the lack of a stated baseline. I'd be curious to see what a "reverse" Bechdel Test would show, and I think it would give greater context to the numbers being thrown around. Yeah, a 55.9% pass rate sounds low, but what if the "reverse" test showed a similar number for male representation in movies? Of course, it's blindingly obvious that the male rate is significantly higher (my guess: 80%-90%), but it's worth noting that just because a movie fails the Bechdel Test doesn't mean it would pass the reverse test (in particular, I think the "about something other than a man/woman" rule would hit both sexes in the same movie pretty often). Having a baseline would better underscore the issue.
  • It's ironic that one of the test's biggest strengths, its objectivity, is also one of its biggest weaknesses. This, however, is true for just about any objective measurement ever conceived (i.e. not just for film). Objective measurements only ever tell a small proportion of the story, and you can't judge an individual movie's worth by checking boxes on a form (unless those boxes are for subjective measurements). If the Bechdel Test is your only way of evaluating movies, you will get a very myopic view of the industry.
  • Are there better, simpler metrics that could illustrate a similar issue? For instance, in an industry where the Auteur theory seems to be generally accepted, the director of the film is considered to be the primary author. Guess how many movies are directed by women? It's somewhere on the order of 5%-10%, and most of them are tiny indies that you've never heard of... When you add in writers, producers, editors, etc... the numbers are still pretty low.
  • So what to do about the Bechdel Test results? I imagine this is where most arguments get really heated. I don't know the answer, but given the above bullet, it looks to me like we need more female filmmakers. Artists tend to focus on what they know, and since the grand majority of filmmakers are men, it's not surprising that female representation is low. How this would happen is a can of worms in itself...
  • It strikes me that the misunderstandings and limitations surrounding the Bechdel test are emblematic of debate surrounding identity politics in general. In particular, the resolution of individual/group dynamics is what trips a lot of people up (i.e. the Bechdel test says nothing important about individual movies, only groups of movies, yet because of the need to apply the Bechdel test at an individual level, the discussion often stays at that level). When it comes to insidious systemic issues like this, there's a narrow line to walk, and it's very easy to veer off the path.
Well, I think I've blabbered on long enough. What say you?
Posted by Mark on November 17, 2013 at 01:30 PM .: Comments (0) | link :.



Wednesday, September 04, 2013

Extra Hot Great
I enjoy listening to podcasts, but with a couple of notable exceptions, they tend to be relatively short-lived affairs. I get the impression that they are a ton of work, with little payoff. As such, I've had the experience of discovering a podcast that I think is exceptional, only to have it close its doors within a month or two of my discovery. Often, there is a big back catalog, which is nice, but it's still depressing that no new episodes are being made. Again, I can't really fault anyone for quitting their podcast - it seems like a lot of work, and the general weekly schedule that is seemingly required in order to maintain an audience doesn't make it any easier.

Extra Hot Great was one of those podcasts that I discovered about a month before they decided to call it quits. They had about a year and a half of back episodes, and I really came to love that podcast. Well, the reason they stopped the podcast was that two of the principal players were starting a new business venture in LA, a website called Previously.tv (I have linked to several of my favorite articles from them over the past few months). If you like television, the site is well worth your time.

And now we can all rejoice, because they've brought back the Extra Hot Great podcast! It is, more or less, the same format as the old classic episodes. A topic or two (usually a show or news item), with some irregular but recurring features in between (my favorite being "I am not a crackpot", a Grampa Simpson-inspired segment where someone lays out their crackpot idea), followed by Game Time, where they come up with absurdly comprehensive and sometimes complicated movie/television/pop culture quizzes and compete against one another (the thing that makes this segment work so well is that Tara and Joe know their shit way better than you, but are probably about equivalent with each other). The old EHG podcast shuffled between movies and TV, but I'm not sure if the Previously.tv incarnation will focus more on TV or not. Nevertheless, I'm excited to see a beloved defunct podcast brought back from the dead, and you should be too!

And while you're at it, take note of your favorite podcasts and enjoy them while you can - maybe write them a good iTunes review, or drop something in the tip jar or something. Chances are, they won't be around forever! For reference, here's my regular stable of podcasts; you should listen to these too!
Posted by Mark on September 04, 2013 at 06:29 PM .: link :.



Sunday, August 25, 2013

Reinventing The Movie Theater Experience (And Shushing)
A few weeks ago, Hunter Walk posted a short blog post about reinventing the movie theater by allowing wifi, outlets, low lights, and a second-screen experience:
Some people dislike going to the movies because of price or crowds, but for me it was more of a lifestyle decision. Increasingly I wanted my media experiences plugged in and with the ability to multitask. Look up the cast list online, tweet out a comment, talk to others while watching or just work on something else while Superman played in the background. Of course these activities are discouraged and/or impossible in a movie theater.

But why? Instead of driving people like me away from the theater, why not just segregate us into environments which meet our needs. ... If you took a theater or two in a multiplex and showed the types of films which lend themselves to this experience I bet you'd sell tickets. Maybe even improve attendance during the day since I could bang out emails with a 50 foot screen in front of me.
Personally, this experience holds little to no interest for me (I can do that at home pretty easily), but I can see why it would be attractive to the Hunter Walks of the world (he's a venture capitalist with kids and very little free time), and the notion of creating separate theaters for this sort of experience is fine by me (so long as the regular experience remains available). I mean, I probably wouldn't partake in this sort of thing, but if there's a market for this, more power to the theaters that can capture that extra revenue.

Of course, that's not the reaction that Walk got from this post, which went much further and wider than I think he was expecting. It looks to me like a typical personal blog post and thought experiment (probably jotted out quickly on a second screen, heh), but it got picked up by several media outlets and the internet lost its collective shit over the suggestion. Some responses were tame, but many went for hyperbole and straw-manned Walk's idea. He wrote a followup post responding to many comments, and again, I find Walk's perspective perfectly reasonable. But then things exploded.

As cuckoo-nutso as this debate already was, Anil Dash came along and lobbed a grenade into the discussion.
Interestingly, the response from many creative people, who usually otherwise see themselves as progressive and liberal, has been a textbook case of cultural conservatism. The debate has been dominated by shushers, and these people aren't just wrong about the way movies are watched in theaters, they're wrong about the way the world works.
This is a bit extreme, but maybe I can follow this. People do refer to texters and the like as "heathens" and joke about the "downfall of society" as represented by rude people at theaters. Then he goes here:
This list of responses pops up all the time, whether it's for arguing why women should not wear pants, or defending slavery, or trying to preserve a single meaning for the word "ironic", or fighting marriage equality, or claiming rap isn't "real" music, or in any other time when social conservatives want to be oppressive assholes to other people.
Zuh? What the hell is he talking about? Is he really equating people who shush other people in movie theaters with people who defend slavery? I suppose he's trying to show a range of attitudes here, but this is absolutely ridiculous, and the entire thing is premised on a straw man of epic proportions. Dash goes on:
People who have fun at the movies can make almost any movie better. When the first Transformers movie came out, one of the key moments in the film is the first time the leader of the Autobots transforms in grand fashion from tractor trailer to giant robot, and pronounces "I am Optimus Prime". At that precise moment, the guy next to me, a grown man in his early 30s, rose to his feet and shouted "YEAH!" while punching his fist in the air. I could see from his sheer emotion that he’d been waiting for this day, to hear this voice say those words, since the moment his stepdad walked out on his mother. This was catharsis. This was truly cinematic.
Dash is absolutely correct here, but, um, that's not the sort of thing people are complaining about. He's positioning shushers as people who disapprove of emotional responses to movies, as if people get shushed for laughing at a comedy or pumping their fist and shouting "Yeah!" during rousing action sequences. Of course, no one is complaining about that. Even the most venerated theaters that treat the moviegoing experience with reverence and awe, like the Alamo Drafthouse, actively encourage such behavior! More:
The shushers claim that not giving a film on the screen one's undivided attention is apparently unspeakably offensive to the many hardworking scriptwriters and carpenters and visual effects supervisors who made the film. Yet these very same Hollywood artists are somehow able to screw up their courage, grimly set their jaws with determination, and bravely carry on with their lives even when faced with the horrible knowledge that some people will see their films in a pan-and-scan version on an ancient CRT screen of an airplane that has an actual jet engine running in the background behind their careful sound mix. Profiles in courage.
This is, at best, a secondary concern. The complaint isn't about the filmmakers, it's about the other people in the theater. If you take your phone out in a dark theater and start talking (or texting), you're taking away from the experience of everyone around you in the theater. Someone who laughs during a comedy or shouts "Yeah!" during an action movie? They're contributing to the fun experience. Someone who's talking to their spouse on the phone (at full volume) about tomorrow's dinner party is seriously fucking with the people around them. You think I'm joking? That very experience happened to me last night during a screening of You're Next (i.e. a horror film that is constantly building and releasing tension, often through silence).
It'd be easier for you to have exactly the hermetically sealed, human-free, psychopathic isolation chamber of cinematic perfection that you seek at home, but if you want to try to achieve this in a public space, please enjoy the Alamo Drafthouse or other excellent theaters designed to accommodate this impulse.
Again, no one is asking for hermetically sealed isolation chambers. At the aforementioned You're Next screening, there were plenty of other people who were clearly into the movie, who would occasionally blurt out "No, don't split up!" or groan in empathetic horror when something violent happened - and those things added to the experience. The asshole talking about his rump roast with his spouse was NOT. Incidentally, no one "shushed" that fucker, which leads me to wonder who the hell Dash is referring to when he talks about these mythical "shushers".

Incidentally, the theater chain that Dash mentions as if it promotes this isolation is Alamo Drafthouse, which is indeed very intolerant of texting and rude behavior in theaters. But it isn't hermetically sealed at all. For crying out loud, it's got a full-service restaurant thing going on, with people constantly walking in and out of the theater, eating food, and drinking beer. People are getting drunk at these theaters, and having a great time. I have no idea where Dash is getting this isolation thing from. Also, he mentions the Alamo Drafthouse as if there's one in every neighborhood. I'd happily go to one if it existed within a hundred miles of my house, but there are only 24 theaters in the country (16 of which are in Texas). And again, the only difference between the Alamo Drafthouse and every other theater in the country is that they have the manpower to actually enforce their rules (since waiters are in and out of the theater, they can see troublemakers and do something about it, etc...)
The intellectual bankruptcy of this desire is made plain, however, when the persons of shush encounter those who treat a theater like any other public space. Here are valid ways to process this inconsistency of expectation:
  • "Oh, this person has a different preference than I do about this. Perhaps we should have two different places to enjoy this activity, so we can both go about our business!"
  • "It seems that group of people differs in their standard of how to behave. Since we all encounter varying social norms from time to time, I'll just do my thing while they do theirs."
  • "I'm finding the inconsistency between our expectations about this experience to be unresolvable or stressful; Next time we'll communicate our expectations in advance so everyone can do what she or he enjoys most."
But shushers don't respond in any of these ways. They say, "We have two different expectations over this public behavior, and mine is the only valid way. First, I will deny that anyone has other norms. Then, when incontrovertibly faced with the reality that these people exist, I will vilify them and denigrate them. Once this tactic proves unpersuasive, I will attempt to marginalize them and shame them into compliance. At no point will I consider finding ways for each of us to accommodate our respective preferences, for mine is the only valid opinion." This is typically followed by systematically demonstrating all of the most common logical fallacies in the process of denying that others could, in good conscience, arrive at conclusions other than their own.
If steam wasn't already shooting out of your ears in frustration at Dash's post, this is where the post goes completely off the rails. The hypocrisy is almost palpable. Let's start with the fact that most movie theaters are not, in fact, public spaces. They are privately owned buildings, and, wonder of wonders, the owners have defined general guidelines for behavior. When was the last time you went to a movie theater and DIDN'T see a plea to turn off your fucking phone at the beginning of the movie? In other words, theaters "communicate our expectations in advance" of every movie they show. It's the Dashes of the world who are ignorant here. This is precisely why I wasn't that upset with Hunter Walk's original suggestion: If a theater wants to allow texting and talking and a second-screen experience, more power to them. Every theater I've ever been to has pleaded with me to consider the other people in the theater and, you know, try not to ruin other people's experience.

Dash's stance here is incomprehensible and hypocritical. What makes rude people's differing standards more valid than the shushers'? He calls shushers "bullies" in this post, but they're simply trying to uphold the standards of the theater. Why are rude people entitled to ruin the experience for everyone else in the theater? I honestly have no idea how someone like Anil Dash, who I know for a fact is a smart, erudite man (from his other writings), could possibly think this is an acceptable argument.
Amusingly, American shushers are a rare breed overall. The most popular film industry in the world by viewers is Bollywood, with twice as many tickets sold in a given year there as in the United States. And the thing is, my people do not give a damn about what's on the screen.

Indian folks get up, talk to each other, answer phone calls, see what snacks there are to eat, arrange marriages for their children, spontaneously break out in song and fall asleep. And that's during weddings! If Indian food had an equivalent to smores, people would be toasting that shit up on top of the pyre at funerals. So you better believe they're doing some texting during movies. And not just Bollywood flicks, but honest-to-gosh Mom-and-apple-pie American Hollywood films.
He's right: American shushers are a rare breed - I think I may have seen people get shushed 2 or 3 times in my life. And I see a TON of movies, to the point where examples of people doing rude things like talking about other subjects, answering their phone, etc... are countless. Usually, people just grin and bear it. And then give up going to the theater. Why spend $30-$40 to see a movie with a friend when you'll just get frustrated by assholes doing rude shit during the entire movie?

India sounds like a horrible place to see a movie, but whatever. I imagine these theaters are pretty clear about what this experience is going to be like, so fine. Is anyone shushing Indians in those theaters? I find that hard to believe. But we're not talking about India, are we? They clearly have different cultural norms than we do in America, and that's awesome!
So, what can shushers do about it? First, recognize that cultural prescriptivism always fails. Trying to inflict your norms on those whose actions arise from a sincere difference in background or experience is a fool's errand.
Someone is attempting to force their culture on someone else here, and it's not the shushers. Dash clearly likes the way things work in India, and is arguing that we should adopt that here. If he's talking about creating separate theaters for his preferred experience, then go for it! We'll let the market sort out what people like. I'll even concede that Dash could be right and his partial attention theaters will swallow up traditional American theaters whole. Of course, in that situation, I'll probably never go to a theater again, but such is life.
Then, recognize your own privilege or entitlement which makes you feel as if you should be able to decide what’s right for others. There's literally no one who's ever texted in a movie theater who has said "Every other person in here must text someone, right now!" Because that would be insane. No one who would like to have wifi at a theater has ever said "Those who don't want to connect should just stay at home!" Because they're not trying to force others to comply with their own standards.
They're not forcing me to text or talk on the phone, but they ARE forcing me to listen to them talk or watch them text. Perhaps if we were talking about a true public space, Dash would have a point, but we're not. The private owners of these theaters are asking you not to do this; the entitlement, therefore, lies with the texters and talkers.

Dash has since written a followup that is much more reasonable (it makes me wonder if his initial post was just link-bait or some other cynical exercise), and again, I agree with the idea of producing new theaters around this concept. They may even experience some success. I just won't be going to any of them.
Posted by Mark on August 25, 2013 at 10:25 AM .: link :.



Wednesday, July 31, 2013

Serendipity (Again)
Every so often, someone posts an article like Connor Simpson's The Lost Art of the Random Find and everyone loses their shit, bemoaning the decline of big-box video, book and music stores (of course, it wasn't that long ago when similar folks were bemoaning the rise of big-box video, book and music stores for largely the same reasons, but I digress) and what that means for serendipity. This mostly leads to whining about the internet, like so:
...going to a real store and buying something because it caught your eye, not because some algorithm told you you'd like it — is slowly disappearing because of the Internet...

...there is nothing left to "discover," because the Internet already knows all. If you "find" a new band, it's likely on a blog that millions of other people read daily. If you "find" a new movie, like the somehow-growing-in-popularity Sharknado, it's because you read one of the millions of blogs that paid far too much attention to a movie that, in the old days, would have gone straight into a straight-to-DVD bargain bin.
I've got news for you, you weren't "discovering" anything back in the day either. It probably felt like you were, but you weren't. The internet is just allowing you to easily find and connect with all your fellow travelers. Occasionally something goes viral, but so what? Yeah, sometimes it sucks when a funny joke gets overtold, but hey, that's life and it happens all the time. Simpson mentions Sharknado as if it came out of nowhere. The truth of the matter is that Sharknado is the culmination of decades of crappy cult SciFi (now SyFy) movies. Don't believe me? This was written in 2006:
Nothing makes me happier when I'm flipping through the channels on a rainy Saturday afternoon than stumbling upon whatever god-awful original home-grown suckfest-and-craptasm movie is playing on the Sci-Fi Channel. Nowhere else can you find such a clusterfuck of horrible plot contrivances and ill-conceived premises careening face-first into a brick wall of one-dimensional cardboard characters and banal, inane, poorly-delivered dialogue. While most television stations and movie production houses out there are attempting to retain some shred of dignity or at least a modicum of credibility, it's nice to know that the Sci-Fi Channel has no qualms whatsoever about brazenly showing twenty minute-long fight scenes involving computer-generated dinosaurs, dragons, insects, aliens, sea monsters and Gary Busey all shooting laser beams at each other and battling for control of a planet-destroying starship as the self-destruct mechanism slowly ticks down and the fate of a thousand parallel universes hangs in the balance. You really have to give the execs at Sci-Fi credit for basically just throwing their hands up in the air and saying, "well let's just take all this crazy shit and mash it together into one giant ridiculous mess". Nothing is off-limits for those folks; if you want to see American troops in Iraq battle a giant man-eating Chimaera, you've got it. A genetically-altered Orca Whale that eats seamen and icebergs? Check. A plane full of mutated pissed-off killer bees carrying the Hanta Virus? Check. They pull out all the stops to cater to their target audience, who are pretty much so desensitized to bad science-fiction that no plot could be too over-the-top to satiate their need for giant monsters that eat people and faster-than-light spaceships shaped like the Sphinx.
And as a long time viewer of the SciFi/SyFy network since near its inception, I can tell you that this sort of love/hate has been going on for decades. That the normals finally saw the light/darkness with Sharknado was inevitable. But it will be short-lived. At least, until SyFy picks up my script for Crocoroid Versus Jellyfish.

It's always difficult for me to take arguments like this seriously. Look, analog serendipity (browsing the stacks, digging through crates, blind buying records at a store, etc...) obviously has value and yes, opportunities to do so have lessened somewhat in recent years. And yeah, it sucks. I get it. But while finding stuff serendipitously on the internet is a different experience, it's certainly possible. Do these people even use the internet? Haven't they ever been on TV Tropes?

It turns out that I've written about this before, during another serendipity flareup back in 2006. In that post, I reference Steven Johnson's response, which is right on:
I find these arguments completely infuriating. Do these people actually use the web? I find vastly more weird, unplanned stuff online than I ever did browsing the stacks as a grad student. Browsing the stacks is one of the most overrated and abused examples in the canon of things-we-used-to-do-that-were-so-much-better. (I love the whole idea of pulling down a book because you like the "binding.") Thanks to the connective nature of hypertext, and the blogosphere's exploratory hunger for finding new stuff, the web is the greatest serendipity engine in the history of culture. It is far, far easier to sit down in front of your browser and stumble across something completely brilliant but surprising than it is walking through a library looking at the spines of books.
This whole thing basically amounts to a signal versus noise problem. Serendipity is basically finding signal by accident, and it happens all the damn time on the internet. Simpson comments:
...the fall of brick-and-mortar and big-box video, book and music stores has pushed most of our consumption habits to iTunes, Amazon and Netflix. Sure, that's convenient. But it also limits our curiosity.
If the internet limits your curiosity, you're doing it wrong. Though if your conception of the internet is limited to iTunes, Amazon, and Netflix, I guess I can see why you'd be a little disillusioned. Believe it or not, there is more internet out there.

As I was writing this post, I listened to a few songs on Digital Mumbles (hiatus over!) as well as Dynamite Hemorrhage. Right now, I'm listening to a song Mumbles describes as "something to fly a mech to." Do I love it? Not really! But it's a damn sight better than, oh, just about every time I blind bought a CD in my life (which, granted, wasn't that often, but still). I will tell you this, nothing I've listened to tonight would have been something I picked up in a record store, or on iTunes for that matter. Of course, I suck at music, so take this all with a grain of salt.

In the end, I get the anxiety around the decline of analog serendipity. Really, I do. I've had plenty of pleasant experiences doing so, and there is something sad about how virtual the world is becoming. Indeed, one of the things I really love about obsessing over beer is aimlessly wandering the aisles and picking up beers based on superficial things like labels or fancy packaging (or playing Belgian Beer Roulette). Beer has the advantage of being purely physical, so it will always involve a meatspace transaction. Books, movies, and music are less fortunate, I suppose. But none of this means that the internet is ruining everything. It's just different. I suppose those differences will turn some people off, but stores are still around, and I doubt they'll completely disappear anytime soon.

In Neal Stephenson's The System of the World, the character Daniel Waterhouse ponders how new systems supplant older systems:
"It has been my view for some years that a new System of the World is being created around us. I used to suppose that it would drive out and annihilate any older Systems. But things I have seen recently ... have convinced me that new Systems never replace old ones, but only surround and encapsulate them, even as, under a microscope, we may see that living within our bodies are animalcules, smaller and simpler than us, and yet thriving even as we thrive. ... And so I say that Alchemy shall not vanish, as I always hoped. Rather, it shall be encapsulated within the new System of the World, and become a familiar and even comforting presence there, though its name may change and its practitioners speak no more about the Philosopher's Stone." (page 639)
In this Slashdot interview, Stephenson applies the same "surround and encapsulate" concept to the literary world. And so perhaps the internet will surround and encapsulate, but never destroy, serendipitous analog discovery. (hat tip to the Hedonist Jive twitter feed)
Posted by Mark on July 31, 2013 at 10:43 PM .: link :.



Sunday, July 28, 2013

Science Friction
Pharaoh's curse: Why that ancient Egyptian statue moves on its own - Museum curators have noticed something odd about an ancient (3000+ years old) Egyptian statue in the Manchester Museum in England. It appears to be moving all on its own despite being locked in a case, and they've actually captured the whole thing on time-lapse video.
The 10-inch (25-centimeter) statue was acquired by the museum in 1933, according to the New York Daily News. The video clearly shows the artifact slowly turning counterclockwise during the day, but remaining stationary at night. ...

Oddly, the statue turns 180 degrees to face backward, then turns no more. This led some observers to wonder if the statue moves to show visitors the inscription on its back, which asks for sacrificial offerings "consisting of bread, beer, oxen and fowl."
Well, as sacrificial offerings go, at least those seem pretty tame. Scientists and curators have come up with tons of hand-wavey explanations, usually involving magnetism or vibrations from passing tourists' footsteps ("vibrational stick-slip friction"), but nothing seems to fit particularly well. It seems poised to remain a mystery, which is, you know, kinda freaky. (via File 770)
Posted by Mark on July 28, 2013 at 08:35 PM .: link :.



Wednesday, May 29, 2013

The Irony of Copyright Protection
In Copyright Protection That Serves to Destroy, Terry Teachout lays out some of the fundamental issues surrounding the preservation of art, in particular focusing on recorded sound:
Nowadays most people understand the historical significance of recorded sound, and libraries around the world are preserving as much of it as possible. But recording technology has evolved much faster than did printing technology—so fast, in fact, that librarians can't keep up with it. It's hard enough to preserve a wax cylinder originally cut in 1900, but how do you preserve an MP3 file? Might it fade over time? And will anybody still know how to play it a quarter-century from now? If you're old enough to remember floppy disks, you'll get the point at once: A record, unlike a book, is only as durable as our ability to play it back.
Digital preservation is already a big problem for librarians, and not just because of the mammoth amounts of digital data being produced. Just from a simple technological perspective, there are many non-trivial challenges. Even if the storage media and reading mechanisms remain compatible over the next century, ensuring that the devices themselves remain usable that far into the future is no small feat. Take hard drives. A lot of film and audio (and, I suppose, books these days too) is being archived on hard drives. But you can't just take a hard drive, stick it on a shelf somewhere, and fire it up in 30 years. Nor should you keep it spinning for 30 years. It requires use, but not constant use. And even then you'll need to ensure redundancy, because hard drives fail.

Just in writing that, you can see the problem. Hard drives clearly aren't the solution. Too many modes of failure there. We need something more permanent. Which means something completely new... and thus something that will make hard drives (and our ability to read them) obsolete.
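
For what it's worth, the standard stopgap here is what digital archivists call "fixity checking": record a checksum for every file at ingest, then periodically re-read each redundant copy and flag anything that no longer matches, so it can be restored from a good copy. Here's a minimal sketch in Python - the paths are hypothetical, and real archives use dedicated tooling (like the Library of Congress's BagIt packaging spec) rather than a one-off script:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        # Hash in 1 MB chunks so large audio/video masters don't exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_manifest(archive_dir, manifest_path):
        # Run once at ingest: remember what every file is supposed to hash to.
        root = Path(archive_dir)
        digests = {str(p.relative_to(root)): sha256_of(p)
                   for p in sorted(root.rglob("*")) if p.is_file()}
        Path(manifest_path).write_text(json.dumps(digests, indent=2))

    def verify(archive_dir, manifest_path):
        # Run periodically: report files that have rotted or gone missing.
        root = Path(archive_dir)
        expected = json.loads(Path(manifest_path).read_text())
        return [rel for rel, digest in expected.items()
                if not (root / rel).is_file() or sha256_of(root / rel) != digest]

    # Hypothetical usage with two redundant copies of the archive:
    # for copy in ("/archive/primary", "/archive/mirror"):
    #     print(copy, verify(copy, "manifest.json"))

None of this solves the deeper problem, of course - you still have to migrate the whole archive (manifest and all) to new media every decade or so - but it at least tells you when a copy has quietly gone bad.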

And that's from a purely technological perspective. The challenges are real, but I'm confident that technology will rise to them. However, once you start getting into the absolutely bonkers realm of intellectual property law, things get stupid really fast. While technology will rise to the challenge, IP owners and lawmakers seem to be engaged in an ever-escalating race to the bottom of the barrel:
In Europe, sound recordings enter the public domain 50 years after their initial release. Once that happens, anyone can reissue them, which makes it easy for Europeans to purchase classic records of the past. In America, by contrast, sound recordings are "protected" by a prohibitive snarl of federal and state legislation whose effect was summed up in a report issued in 2010 by the National Recording Preservation Board of the Library of Congress: "The effective term of copyright protection for even the oldest U.S. recordings, dating from the late 19th century, will not end until the year 2067 at the earliest.… Thus, a published U.S. sound recording created in 1890 will not enter the public domain until 177 years after its creation, constituting a term of rights protection 82 years longer than that of all other forms of audio visual works made for hire."

Among countless other undesirable things, this means that American record companies that aren't interested in reissuing old records can stop anyone else from doing so, and can also stop libraries from making those same records readily accessible to scholars who want to use them for noncommercial purposes. Even worse, it means that American libraries cannot legally copy records made before 1972 to digital formats for the purpose of preservation...
Sheer insanity. The Library of Congress appears to be on the right side of the issue, offering common-sense recommendations for copyright reform... that will almost certainly never be enacted by lawmakers or embraced by IP owners. Still, their "National Recording Preservation Plan" seems like a pretty good idea. Again, it's a pity that almost none of their recommendations will be enacted, and while the need for Copyright reform is blindingly obvious to anyone with a brain, I don't see it happening anytime soon. It's a sad state of affairs when the only victories we can celebrate in this realm are grassroots opposition to absurd laws like SOPA/PIPA/ACTA.

I don't know the way forward. When you look at the economics of the movie industry, as recently laid out by Steven Soderbergh in a speech that's been making the rounds of late (definitely worth a watch, if you've got a half hour), you start to see why media companies are so protective of their IP. As currently set up, your movie needs to make 120 million dollars, minimum, before you start to actually turn a profit (and that's just covering the marketing costs - you'd have to add on the production budget to get a better idea). That, too, is absurd. I don't envy the position of media companies, but on the other hand, their response to such problems isn't to fix the problem but to stomp their feet petulantly, hold on to copyrighted works for far too long, and to antagonize their best customers.

That's the irony of protecting copyright. If you protect it too much, no one actually benefits from it, not even the copyright holders...
Posted by Mark on May 29, 2013 at 10:46 PM .: link :.



Sunday, March 31, 2013

TV Shows I Should Probably Catch Up With
As 2013 progresses, I realize that I'm watching much less in the way of movies lately, and catching up with more television series. In terms of "appointment television", I still don't watch much, but I do like to catch up with some older seasons of good stuff, and streaming sites like Netflix are a big enabler of some of this stuff. So what are some things I should probably catch up with?
  • Breaking Bad - Everyone loves this show so damn much, but I found the first season a bit of a slog. Some high points mixed in, for sure, but it always seems to slow down and focus on certain conflicts that I find really dumb. That being said, the beginning of the second season was amazing. It's bogged down a bit again right now, but I'm sure I'll continue to make slow progress.
  • Mad Men - A show I've never been particularly interested in, but heck, it's on Netflix, so why not give it a shot sometime.
  • Doctor Who - Speaking more of the "recent" incarnation of the show, which is all available on Netflix right now. Boy, that Christopher Eccleston season sure did suck, but it started to find a groove at some point, and the second season really does pick things up. Looking forward to catching up with these at some point. I grew up watching the old Doctor Who episodes on PBS... even if I can't remember much of those episodes, I did enjoy them.
  • Twin Peaks - Many moons ago, a friend loaned me his DVD set for Twin Peaks season one... and I started watching, only to find that... the pilot episode was missing! It was apparently stuck in some sort of legal limbo or somesuch. Well, that's all settled now, and the whole series is up on Netflix. Sign me up.
  • Arrested Development - I've seen a bunch of individual episodes of this in isolation, and probably the entire first season, but I've never really finished it off. I seem to go in chunks though, watching about 5-10 episodes in a row, then burning out and moving elsewhere for a while. But since new episodes are coming, I figure I should probably finish the series off.
  • Parks and Recreation - I watched the first season of this a while back and found it diverting enough, but I'm told that it really doesn't hit its stride until seasons 2 and 3, so I guess I'm in for some more of this...
  • Alias - For whatever reason, I never watched this J. J. Abrams series. Well, it's all on Netflix, so why not give it a shot? I mean, I like spy stories as much as the next guy, and Abrams seems pretty good with that sort of thing.
  • Supernatural - Last year, I watched a bunch of old X-Files episodes and I got that itch for episodic "creature of the week" type of shows, and this one seems to fit the bill nicely. Honestly, while there does seem to be some sort of overarching continuity to the series, most of these are standalone stories, which is actually kinda fun, especially when you're bogged down with a bunch of other series that are all so involved (and probably not going to pay off)...
Well, that should keep me busy for the next five years or so. I should probably go and watch some of these right now.
Posted by Mark on March 31, 2013 at 06:30 PM .: link :.



Wednesday, February 27, 2013

Recent and Future Podcastery
I have a regular stable of podcasts that generally keep me happy on a weekly basis, but as much as I love all of them, I will sometimes greedily consume them all too quickly, leaving me with nothing. Plus, it's always good to look out for new and interesting stuff. Quite frankly, I've not done a particularly good job keeping up with the general podcasting scene, so here's a few things I caught up with recently (or am planning to listen to in the near future):
  • Idle Thumbs - This is primarily a video game podcast, though there are some interesting satellite projects too. I have to admit that my video game playing time has shrunk considerably in the past year or so, but I still sometimes enjoy listening to this sort of thing. Plus, the Idle Book Club is, well, exactly what it sounds like - a book club podcast, with a book a month. I've not actually listened to much of any of this stuff, but it seems like fertile ground.
  • Firewall & Iceberg Podcast - The podcast from famed television critics Alan Sepinwall and Dan Fienberg. It focuses, not surprisingly, on television shows, which is something that I've been watching more of lately (due to the ability to mainline series on Netflix, etc...) Again, I haven't heard much, but they seem pretty knowledgeable and affable. I suspect this will be one of those shows that I download after I watch a series to see what they have to say about it.
  • Film Pigs Podcast - A movie podcast that's ostensibly right in my wheelhouse, and it's a pretty fun one, though I'm not entirely sure how bright its future really is at this point, given that they seem to be permanently missing one member of their normal crew and publish on a bi-monthly schedule. Still, there's some fun stuff here, and I'll probably listen to more of their back catalog when I run out of my regulars...
Speaking of that regular stable, this is what it's currently looking like: There are a few others that I hit up on an inconsistent basis too, but those are the old standbys...
Posted by Mark on February 27, 2013 at 09:43 PM .: link :.


End of This Day's Posts

Sunday, December 23, 2012

Holiday Link Dump
Things are getting festive around here, so here's a few quick links for your holiday enjoyment:
  • Arnold's Very Special Xmas Party - With very special guest... Mike Tyson!? I have no idea where it came from, but this video is astounding.
  • The Twelve Days of Christmas by Colin Nissan - A more detailed account of the infamous twelve gifts. Sample:
    On the second day of Christmas my true love gave to me, two turtle doves. Wow, she’s really into the avian theme this year. Um, thank you? I guess I’ll just put them in the kitchen with the partridge and the pear tree, which suddenly seems a lot bigger than it did yesterday.
    Things get weirder and weirder as the 12 days continue. Heh.
  • Crazy Christmas Cards from 1955! - So this guy found a box filled with Christmas cards from his grandfather's failed attempt at starting a greeting card company a few years after WWII. It's an interesting story, but this card is just profoundly weird. Look at this thing:
    Weird Santa Card
    Yikes. Also kinda awesome. It's too late to order them in time for Christmas, but he is selling them, which is pretty funny: the whole project bankrupted his grandfather, but it's probably selling pretty well these days.
  • Bing and Bowie: An Odd Story of Holiday Harmony - The backstory behind the improbable Bing Crosby and David Bowie duet "Peace on Earth/Little Drummer Boy" (being a newspaper piece, it doesn't actually link to the song, but you can watch it on YouTube).
  • It's Beginning to Look Alot Like Fishmen - A Lovecraftian take on Christmas music, based on The Shadow Over Innsmouth. Heh.
That's all for now, hope you all have a happy holiday!
Posted by Mark on December 23, 2012 at 03:23 PM .: link :.


End of This Day's Posts

Wednesday, September 12, 2012

Podcastastic
Podcasts are weird. I often find myself buried under hours of great podcastery that I can barely keep up with. But then every once in a while, like this past weekend, I abruptly run out of things to listen to. Oh sure, there are plenty of backup options or middling podcasts that I can fall back on, but I like to look forward to stuff too. Here are some recent podcasts that I've checked out, some great, some I'm not so sure about.
  • Radio Free Echo Rift - This has quickly joined the highest ranks of the regular rotation. Full disclosure, Mike and Don are real-life friends, but they've actually put together a really well-crafted podcast. They talk about comic books and movies and such, but even when they're talking about something I'm not familiar with, I find I'm usually still interested (I mean, I'm not a big comic book guy, but I still find their talks in that realm interesting). Recent highlights include a podcast discussing the typical three-act structure of films, then applying that to a remake of an old semi-obscure Disney movie. Two half-hour episodes a week so far, and they also make their own comics (though none are available right now). Oh, and I'm told they'll be discussing a voicemail from me in today's podcast, so hop to it.
  • Filmspotting: Streaming Video Unit (SVU) - Filmspotting Original Recipe has long been a Kaedrin favorite, and this spinoff podcast focuses on movies available on online streaming services. The hosts are Alison Willmore and Matt Singer, whom you may recognize as the hosts of the long-defunct IFC News podcast. The format generally consists of a long review (which, since this is streaming, is never a new release and often easy to play along with), some picks to complement that movie (whether it be a genre or director or whatever), and some other streaming picks. They also do this thing where they give each other a number, and they have to tell the other what movie is that number in their Netflix Instant queue. Awesome. This is a bi-weekly podcast, but it's a solid addition to the regular rotation.
  • The Hysteria Continues - It's getting to be that time of year again - time to fire up some horror-centric podcasts, and this one seems heavily focused on slasher films. However, these shows are enormous. Most episodes run over two hours, some even hitting three. Much of that runtime isn't spent discussing the movie of the week, and I do feel like there's a little dead weight in the show, but this time of year, I'm totally down for podcasts like this.
Well, that's all for now. Happy listening. I think we'll be returning to X-Files land on Sunday (would have done so tonight, but blogging software woes over the past couple days have drained the time available)...
Posted by Mark on September 12, 2012 at 09:51 PM .: link :.


End of This Day's Posts

Sunday, July 15, 2012

What is good?
Ian Sales thinks he knows:
I've lost count of the number of times I've been told "good is subjective" or "best is subjective". Every time I hear it, it makes me howl with rage. Because it is wrong.

If there is no such thing as good - because if it's entirely subjective and personal, then it's completely useless as a descriptive term - then how do editors choose which books to publish, how do judges choose which books to give prizes to, how do academics chose which books to study? And why don't they all choose completely different books?
The irony here is that I've lost count of the number of times I've been told that "good is objective". And yet, no one seems to be able to define what constitutes good. Even Ian, despite his adamant stance, describes what is good in entirely subjective terms.
It is not an exact science, and it is subject to changes in taste and/or re-evaluation in light of changes in attitudes and sensibilities. But there are certain key indicators in fiction which can be used to determine the quality of that piece of fiction.
Having established that there are key indicators that can be used to determine quality, Sales proceeds to list... approximately none of them. Instead, he talks about "taste" and "changes in attitudes and sensibilities" (both of which are highly subjective). If it's not an "exact science", how is it objective? Isn't this an implicit admission that subjectivity plays a role? He does mention some criteria for bad writing though:
Perhaps it's easier to describe what is bad - if good is subjective, then by definition bad must be too. Except, strangely, everyone seems to agree that the following do indeed indicate that a piece of fiction is bad: cardboard cutout characters, idiot plotting, clumsy prose, tin-earred dialogue, lack of rigour, graceless info-dumping, unoriginality, bad research...
The problem with this is that most of his indicators are subjective. Some of them could contain a nugget of objectivity, notably the "bad research" piece, but others are wholly subjective. What exactly constitutes "tin-eared dialogue"? One person's cardboard cutout character is another person's fully realized and empathetic soul.

Perhaps it's my engineering background taking over, but I have a pretty high standard for objectivity. There are many objective measures of a book, but most of those aren't very useful in determining the book's quality. For instance, I can count the number of letters or words in the book. I can track the usage of punctuation or contractions. Those numbers really won't tell me much, though. I can look at word distribution and vocabulary, but then, there are a lot of classics that don't use flowery language. Simplicity sometimes trumps complexity. I can evaluate the grammar using the standards of our language, but by those measures, James Joyce and Thomas Pynchon would probably be labeled "bad" writers. For that matter, so would Ian, whose recent novella Adrift on the Sea of Rains eschews the basic grammatical convention of using quotation marks for dialogue. But they're not bad writers, in large part because they stray from the standards. Context is important. So that's not really that useful either.
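
To make that concrete, here's a minimal Python sketch of the kind of objective measurements I mean (the sample text and the specific metrics are just illustrative picks of mine). Anyone who runs this on the same text gets identical numbers, which is exactly what makes the measurements objective, and also why they say so little about quality:

    from collections import Counter
    import re

    def naive_metrics(text):
        """A few trivially objective measurements of a text. None of these
        numbers says much about the text's quality - that's the point."""
        words = re.findall(r"[a-zA-Z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        counts = Counter(words)
        return {
            "letters": sum(len(w) for w in words),
            "words": len(words),
            "unique_words": len(counts),
            "type_token_ratio": len(counts) / len(words),
            "avg_sentence_length": len(words) / len(sentences),
            "commas": text.count(","),
            "contractions": sum(n for w, n in counts.items() if "'" in w),
        }

    print(naive_metrics("Call me Ishmael. Some years ago, never mind how "
                        "long precisely, I thought I would sail about."))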

The point of objectivity is to remove personal biases and feelings from the equation. If you can objectively measure a book, then I should be able to do the same - and our results should be identical. If we count the words in a book, we will get the same answer (assuming we count correctly). Similarly, if we're able to objectively measure a book's quality, you and I should come to the same conclusion. Now, Ian Sales has read more books than me. The guy's a writer, and he knows his craft well, so perhaps the two of us won't see eye to eye on a lot of things. But even getting two equivalently experienced people to agree on everything is a fool's errand. Critical reading is important. Not everyone who subverts grammatical conventions does so skillfully or for good reason. Sometimes simplicity can be elegant, sometimes it feels clumsy. Works of art need to be put into their cultural and historical context, and thus a work should stand up to some sort of critical examination. But critical is not equivalent to objective.

Now, Ian does have an interesting point here. If what's "good" is subjective, then how is that a valuable statement?
If good is subjective, then awards are completely pointless. And studying literature, well, that's a complete waste of time too. After all, how can you be an expert in a topic in which one individual's value judgment is worth exactly the same another person's? There'd be no such thing as an expert. All books would have exactly the same artistic value.
Carried to its logical extreme, the notion that what's "good" is wholly subjective does complicate matters. I don't think I'd go quite as far as Ian did in the above referenced paragraph, but maybe he's on to something.

Ian asked a bunch of questions above, so let me try to answer them:
  • How do editors choose which books to publish? This is a pretty simple one, though I don't think that Ian will like the answer: editors choose to publish the books that they think will sell the most. To be sure, editors will also take a chance on something that could bomb... why is that? Because I think even Ian would concede that most readers are not even attempting to be objective in their purchasing habits. They buy what they feel like reading. The neat thing about this one is that there actually is an objective measurement involved: sales. Now, are sales an indication of quality? Not really. But neither are most objective measurements of a book. The useful thing about sales, though, is that they're an objective measurement of the subjective tastes of a given market. There are distorting factors, to be sure (advertising, the size and composition of the market, etc...), but if you want objectivity, sales can boil the subjective response to a book down to a single number. And if an editor is bad at picking good sellers, they won't be an editor for much longer...
  • How do judges choose which books to give prizes to? My guess is that it's their subjective taste. In most cases, there isn't a single judge handing out the award, though, so we've got another case of an objective measurement of a group of people's subjective assessments. In the case of, say, the Hugo Awards, there are thousands of judges, all voting independently. There's a lot of room for fudging there. There's no guarantee that every voter read every book before casting their ballot (all you need to do to vote is to pay to be a member of the current year's Worldcon), but since there are usually around 1000 voters, the assumption is that inexperience or malice among voters is smeared into a small distortion (see the sketch after this list). Other awards are chosen by small juries, one example being the Pulitzer Prize. I don't really know the inner workings of these, and I assume each award is different. I've definitely heard of small juries getting together and having a grand debate amongst themselves as to who the winner should be. The assumption with juried prizes is that the members of the jury are "experts". So if I were to be on the jury for a Science Fiction award, I should probably have extensive knowledge of Science Fiction literature (and probably general literature as well). More on this in a bit. Ultimately, an award is meant to do the same thing as revenue or sales - provide an objective assessment of the subjective opinions of a group of people.
  • How do academics choose which books to study? And why don't they all choose completely different books? I won't pretend to have any insight into what drives academia, but from what I've seen, the objective qualities they value in books seem to vary wildly. I assume we're talking about fiction here, as non-fiction probably has more objective measures than fiction.
  • How can you be an expert in a topic in which one individual's value judgment is worth exactly the same as another person's? I get what he's going for with this question, but there's a pretty simple answer here. An expert in a topic will have more experience and knowledge on that topic than a non-expert. Sales has read more books than me, both within and outside of SF, and he's a writer himself. I would think of him as more of an expert than me. I'm just some guy on the internet. Unfortunately, one's expertise is probably also subjective. For instance, you can measure how many books someone's read, but comprehension and contextualization might be a little more difficult to figure out. That being said, individual experts are rarely given a lot of power, and I imagine they would suffer setbacks if they're consistently "wrong" about things. At their most important, they'll be a reviewer for a large newspaper or perhaps a jury member. In both cases, their opinions are smeared across a bunch of other people's thoughts.
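
To illustrate that smearing effect, here's a hedged little Python simulation; the books, quality numbers, and voter counts are all invented for illustration, not drawn from any real award. With a thousand independent voters, a chunk of random (or even malicious) ballots barely moves the aggregate ranking; with a five-person jury, it easily can:

    import random

    def simulate_award(n_voters=1000, n_malicious=50, noise=1.0, seed=42):
        """Toy model: many independent, noisy (and some bad-faith) ballots,
        summed into a single ranking. The 'true' quality values are invented."""
        random.seed(seed)
        true_quality = {"Book A": 7.0, "Book B": 6.5, "Book C": 5.0}
        totals = {book: 0.0 for book in true_quality}
        for voter in range(n_voters):
            for book, quality in true_quality.items():
                if voter < n_malicious:
                    score = random.uniform(0, 10)             # votes at random
                else:
                    score = quality + random.gauss(0, noise)  # honest but noisy
                totals[book] += score
        return sorted(totals, key=totals.get, reverse=True)

    print(simulate_award())                           # large electorate: stable ranking
    print(simulate_award(n_voters=5, n_malicious=2))  # small jury: much swingier
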
The common thread between all of these things is that there's a combination of objective and subjective measurements. At some point in his post, Sales sez that objective measurement of what is good is "why some books are still in print two hundred years after they were first published." That's something I think we'd all like to believe, but I don't know how true that is... I wonder what books from today will still be in print in 200 years (given the nature of current technology, that might get tricky, but let's say I wonder what books will be relevant and influential in 200 years)? There's a school of thought that thinks it will be the high literary stuff discussed by academics. Another school of thought thinks it will be best-selling populist stuff like Stephen King. I don't think it's that easy to figure out. There's an element of luck or serendipity (whatever you want to call it) that I think plays into this, and that I think we're unlikely to predict. Why? Because it's ultimately a subjective enterprise.

We can devise whatever measurements we want, we can come up with statistical sampling models that will take into account sales and votes and prizes and awards and academic praise and journal mentions, whatever. I actually find those to be interesting and fun exercises, but they're just that. They ultimately aren't that important to history. I'd bet that the things from our era that are commonly referenced 200 years from now would seem horribly idiosyncratic and disjointed to us...

Sales concludes with this:
If you want to describe a book in entirely subjective terms, then tell people how much you enjoyed it, how much you liked it. That's your own personal reaction to it. It appealed to you, it entertained you. That's the book directly affecting you. Another person may or may not react the same way, the book might or might not do the same to them.

Because that's subjective, that is.
He's not wrong about that. Enjoyment is subjective. But if we divorce the concept of "good" from the concept of "enjoyment", what are we left with? It's certainly a useful distinction to make at times. There are many things I "like" that I don't think are particularly "good" on any technical level. I'm not saying that a book has to be "enjoyable" to be "good", but I don't think they're entirely independent either. There are many ways to measure a book. For the most part, in my opinion, the objective ones aren't very useful or predictive by themselves. You could have an amazingly well written book (from a prose standpoint) put into service of a poorly plotted story, and then what? On the other hand, complete subjectivity isn't exactly useful either. You fall into the trap that Ian lays out: if everything is entirely subjective, then there is no value in any of it. That's why we have all these elaborate systems though. We have markets that lead to sales numbers, we have awards (with large or small juries, working together or sometimes independently), we have academics, we have critics, we have blogs, we have reviews, we have friends whose opinions we trust, we have a lot of things we can consider.

In chaos theory, even simple, orderly systems display chaotic elements. Similarly, even the most chaotic natural systems have some sort of order to them. This is, of course, a drastic simplification. One could argue that the universe is headed towards a state of absolute entropy; the heat death of the universe. Regardless of the merits of this metaphor, I feel like the push and pull between objectivity and subjectivity is similar. Objective assessments of novels that are useful will contain some element of subjectivity. Similarly, most subjective assessments will take into account objective measurements. In the end, we do our best with what we've got. That's my opinion, anyway.
Posted by Mark on July 15, 2012 at 07:05 PM .: link :.


End of This Day's Posts

Wednesday, June 27, 2012

Peak Performance
A few years ago, Malcolm Gladwell wrote an article called How David Beats Goliath, and the internets rose up in nerdy fury. Like a lot of Gladwell's work, the article is filled with anecdotes (whatever you may think of Gladwell, he's a master of anecdotes), most of which surround the notion of a full-court press in basketball. I should note at this point that I absolutely loathe the sport of basketball, so I don't really know enough about the mechanics of the game to comment on the merits of this strategy. That being said, the general complaint about the article is that Gladwell chose two examples that aren't really representative of the full-court press. The primary example seems to be a 12-year-old girls' basketball team, coached by an immigrant unfamiliar with the game:
Ranadive was puzzled by the way Americans played basketball. He is from Mumbai. He grew up with cricket and soccer. He would never forget the first time he saw a basketball game. He thought it was mindless. Team A would score and then immediately retreat to its own end of the court. Team B would inbound the ball and dribble it into Team A's end, where Team A was patiently waiting. Then the process would reverse itself. A basketball court was ninety-four feet long. But most of the time a team defended only about twenty-four feet of that, conceding the other seventy feet. Occasionally, teams would play a full-court press—that is, they would contest their opponent's attempt to advance the ball up the court. But they would do it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadive thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent's end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?
The strategy apparently worked well, to the point where they made it to the national championship tournament:
The opposing coaches began to get angry. There was a sense that Redwood City wasn't playing fair - that it wasn't right to use the full-court press against twelve-year-old girls, who were just beginning to grasp the rudiments of the game. The point of basketball, the dissenting chorus said, was to learn basketball skills. Of course, you could as easily argue that in playing the press a twelve-year-old girl learned something much more valuable - that effort can trump ability and that conventions are made to be challenged.
Most of the criticism of this missed the forest for the trees. A lot of people nitpicked some specifics, or argued as if Gladwell was advocating for all teams playing a press (when he was really just illustrating a broader point that underdogs don't always need to play by the stronger teams' conventions). One of the most common complaints was that "the press isn't always an advantage" which I'm sure is true, but again, it kinda misses the point that Gladwell was trying to make. Tellingly, most folks didn't argue about Gladwell's wargame anecdote, though you could probably make similar nitpicky arguments.

Anyway, the reason I'm bringing this up three years after the fact is not to completely validate Gladwell's article or hate on his critics. As I've already mentioned, I don't care a whit about basketball, but I do think Gladwell has a more general point that's worth exploring. Oddly enough, after recently finishing the novel Redshirts, I got an itch to revisit some Star Trek: The Next Generation episodes and rediscovered one of my favorite episodes. Oh sure, it's not one of the celebrated episodes that make top 10 lists or anything, but I like it nonetheless. It's called Peak Performance, and it's got quite a few parallels to Gladwell's article.

The main plot of the episode has to do with a war simulation exercise in which the Enterprise engages in a mock battle with an inferior ship (with a skeleton crew led by Commander Riker). There's an obvious parallel here between the episode and Gladwell's article (when asked how a hopelessly undermatched ship can compete with the Enterprise, Worf responds "Guile."), but it's the B plot of the episode that is even more relevant (the main plot goes in a bit of a different direction due to some meddling Ferengi).

The B plot concerns a military strategist named Kolrami. He's acting as an observer of the exercise and he's arrogant, smarmy, and condescending. He's also a master at Strategema, one of Star Trek's many fictional (and nonsensical) games. Riker challenges this guy to a match because he's a glutton for punishment (this really is totally consistent with his character) - he just wants to say that he played the master, even if he lost... which, of course, he does. Later, Dr. Pulaski volunteers Data to play a game, with the thought being that the android would easily dispatch Kolrami, thus knocking him down a peg. But even Data loses.

Data is shaken by the loss. He even removes himself from duty. He expected to do better. According to the rules, he "made no mistakes", and yet he still lost. After analyzing his failure and discussing the matter with the captain (who basically tells Data to shut up and get back to work), Data resumes his duty, eventually even challenging Kolrami to a rematch. But this time, Data alters his premise for playing the game. "Working under the assumption that Kolrami was attempting to win, it is reasonable to assume that he expected me to play for the same goal." But Data wasn't playing to win. He was playing for a stalemate. Whenever opportunities for advancement appeared, Data held back, attempting to maintain a balance. He estimated that he should be able to keep the game going indefinitely. Frustrated by Data's stalling, Kolrami forfeits in a huff.

There's an interesting parallel here. Many people took Gladwell's article to mean that he thought the press was a strategy that should be employed by all teams, but that's not really the point. The examples he gave were situations in which the press made sense. Similarly, Data's strategy of playing for stalemate was uniquely suited to him. The reason he managed to win was that he is an android without any feelings. He doesn't get frustrated or bored, and his patience is infinite. So while Kolrami may have technically been a better player, he was no match for Data once Data played to his own strengths.

Obviously, quoting fiction does nothing to bolster Gladwell's argument, but I was struck by the parallels. One of the complaints about Gladwell's article that rang at least a little true was that the article's overarching point was "so broad and obvious as to be not worth writing about at all." I don't know that I fully buy that, as a lot of great writing can ultimately be boiled down to something "broad and obvious", but it's a fair point. On the other hand, even if you think that, I do find that there's value in highlighting examples of how it's done, whether it's a 12-year-old girls' basketball team, or a fictional android playing a nonsensical (but metaphorically apt) game on a TV show. It seems that human beings sometimes need to be reminded that thinking outside the box is an option.
Posted by Mark on June 27, 2012 at 09:34 PM .: link :.


End of This Day's Posts

Wednesday, May 02, 2012

Tweets of Glory
One of the frustrating things about Twitter is that it's impossible to find something once it's gone past a few days. I've gotten into the habit of favoriting the tweets I find particularly funny or that I need to come back to, which is nice, as it allows me to publish a cheap Wednesday blog entry (incidentally, sorry for the cheapness of this entry) that will hopefully still be fun for folks to read. So here are some tweets of glory:




Note: This was Stephenson's first tweet in a year and a half.

This one is obviously a variation on a million similar tweets (and, admit it, it's a thought we've all had), but it's the first one I saw (or at least, the first one I favorited; I'm sure it's far from the first time someone made that observation).



Well, that happened. Stay tuned for some (hopefully) more fulfilling content on Sunday...
Posted by Mark on May 02, 2012 at 08:36 PM .: link :.


End of This Day's Posts

Wednesday, April 25, 2012

Recent Podcastery
I like podcasts and listen to many different ones, but it seems that the ones that I actually look forward to are few and far between. Here are a few recent additions to the rotation:
  • Extra Hot Great - This has been my favorite recent discovery, and over the past couple months, I think I've burned my way through their entire archive (80 episodes, plus a crapton of "Mini" episodes). Great personalities and commentary, a solid format with some inventive segments, and plenty of fun. A typical episode starts with a quick discussion of a recent TV series or movie (incidentally, tons of spoilers, so be forewarned), followed by some miscellaneous segments (my favorites being "I am not a crackpot" where people lay out their crackpot ideas, and "The most awesome thing I saw on television this week" in which Kim Reed gives a hysterical plot summary of the most ridiculous shows that she apparently watches a lot of), and then The Canon, in which someone presents a single television episode for induction into the Extra Hot Great Canon. The Canon is a surprisingly well rounded affair, with lots of variety and really in-depth discussions. The folks on the podcast are actually quite discerning in their judgement, and it's always interesting listening. Each podcast ends with a "Game Time" segment, during which you realize that these people know way more about television than you (or, well, me). It's more television focused than my usual preferred podcasts, but I love it anyway. Very fun and interesting stuff. Highly recommended!
  • Onion AV Club Reasonable Discussions - The Onion somewhat recently revamped their podcast and it was really great. They discuss music, movies, and television, and they're usually pretty insightful folks. They don't quite have a big format like Extra Hot Great, but it's still an interesting podcast. Alas, they seem to be on something of a hiatus right now (no podcast in about a month). I hope they do bring it back though, as it was solid.
  • Slate's Culture Gabfest - I think this might be the most pretentious thing I have ever heard, but it's actually pretty approachable, even if they let loose with a massive wave of elitist snobbery from time to time. I probably disagree with them more often than not, but they tend to tackle interesting subjects from week to week. Another podcast without formally defined segments, but they usually have three culturally significant things to discuss, and they end every episode with an "endorsement" of something they enjoyed during that week.
That's all for now....
Posted by Mark on April 25, 2012 at 10:19 PM .: link :.


End of This Day's Posts

Wednesday, April 18, 2012

Link Dump
I'm gonna be taking a trip to The Cabin in The Woods tonight, so time is sparse, thus some linkys for you:
  • In Defense of Microsoft Word - Aziz makes a nice argument in response to incessant whinging on the internets:
    It’s certainly true that using Word for simple text like email or blog posts is overkill, in much the same way that using a jet engine to drive your lawnmower is overkill. What’s peculiar is that rather than using simpler tools for their simpler tasks, these people have declared that the more complex and capable tool is "obsolete" and "must die". This attitude betrays a type of phobia towards technology that I suspect has grown more prevalent as our technology interfaces have become increasingly more "dumbed down".
    I mostly agree with Aziz. While I haven't used Word (or a word processor in general) in my personal life in years, I use it every day at work, and the notion that you can't use Word to collaborate is bonkers. It may not be the best tool for that, but it's certainly not something that needs to die. An interesting post...
  • Books: Bits vs. Atoms - Those who have enjoyed my recent bloviating about ebooks will probably get a kick out of this... better organized... take on the subject (that being said, we cover a lot of the same ground).
  • What Amazon's ebook strategy means - Speaking of ebooks, Charlie Stross clearly lays out why Amazon is dominating the ebook market, how the publishers shot themselves in the foot by practically insisting that Amazon dominate the market, why it's a bad situation to be in, and how publishers can take some steps in the right direction. Hint: get rid of DRM, you dummies! There are a lot of lawsuits and wanking in the book and ebook industry right now, and it's tempting to take sides with Amazon or the publishers or Apple or whoever, but the more I read about it, the more I think that everyone is to blame. So far, this hasn't really impacted us consumers that much, but it certainly could. Here's to hoping these folks get their heads bolted on straight in the near future.
  • Neal Stephenson has a hard time talking about swordplay - Normally I find "trailers" for books to be mildly embarrassing (the trailer for Stephenson's Anathem is a particularly bad example), but this one is pretty funny. No idea how much of it will be represented in the forthcoming paperback release of The Mongoliad, but still.
  • Gabe's PAX Post - Gabe from Penny Arcade helps run huge video game conventions that are explicitly targeted towards players (most conventions are about general technology or development, and are targeted towards journalists or developers). As one of the creators and organizers, Gabe has to deal with all sorts of crap, and he covers a few of these here, including a little prank he played on a troll, and a vexing problem concerning boobies (aka the perennial Booth Babe issue). Read the whole thing, but the key paragraph is this:
    How about all of you that hate me get together and have your own conference. I need you to decide if half naked girls are empowered or exploited because I’m doing my fucking best here and it’s apparently always wrong. I swear to God I don’t understand how I’m supposed to know if I’m promoting the patriarchy or criminalizing the female body.
    As Steven notes, this is a cry for help. I wish I had answers, but fortunately, I'm not in Gabe's position. I can just treat people equally and be happy with that.
That's all for now. Also, go Flyers.
Posted by Mark on April 18, 2012 at 07:09 PM .: link :.


End of This Day's Posts

Sunday, April 15, 2012

Kickstarted
When the whole Kickstarter thing started, I went through a number of phases. First, it's a neat idea and it leverages some of the stuff that makes the internet great. Second, as my systems analyst brain started chewing on it, I had some reservations... but that was short-lived as, third, some really interesting stuff started getting funded. Here are some of the ones I'm looking forward to:
  • Singularity & Co. - Save the SciFi! - Yeah, so you'll be seeing a lot of my nerdy pursuits represented here, and this one is particularly interesting. This is a project dedicated to saving SF books that are out of print, out of circulation, and, ironically, unavailable in any sort of digital format. The Kickstarter is funding the technical solution for scanning the books as well as tracking down and securing copyright. Judging from the response (over $50,000), this is a venture that has found a huge base of support, and I'm really looking forward to discovering some of these books (some of which are from well known authors, like Arthur C. Clarke).
  • A Show With Ze Frank - One of the craziest things I've seen on the internet is Ze Frank's The Show. Not just the content, which is indeed crazy, but the sheer magnitude of what he did - a video produced every weekday for an entire year. Ze Frank grew quite a following at the time, and in fact, half the fun was his interactions with the fans. Here's to hoping that Sniff, hook, rub, power makes another appearance. And at $146 thousand, I have no idea what we're in for. I always wondered how he kept himself going during the original show, but now at least he'll be funded.
  • Oast House Hop Farm - And now we come to my newest obsession: beer. This is a New Jersey farm that's seeking to convert a (very) small portion of their land into a hop farm. Hops in the US generally come from the west coast (Washington's Yakima Valley, in particular). In the past, that wasn't the case, but some bad luck (blights and infestations) brought east coast hops down, then Prohibition put a nail in the coffin. The farm hopes to supply NJ brewers as well as homebrewers, so mayhaps I'll be using some of their stuff in the future! So far, they've planted Cascade and Nugget hops, with Centennial and Newport coming next. I'm really curious to see how this turns out. My understanding is that it takes a few years for a hop farm to mature, and that each crop varies. I wonder how the East Coast environs will impact the hops...
  • American Beer Blogger - Despite the apparent failure of Discovery's Brewmasters, there's got to be room for some sort of beer television show, and famous beer blogger and author Lew Bryson wants to give it a shot. The Kickstarter is just for the pilot episode, but assuming things go well, there may be follow-up efforts. I can only hope it turns out well. I enjoyed Brewmasters for what it was, but being centered on Dogfish Head limited it severely. Sam Calagione is a great, charismatic guy, but the show never really captured the amazing stuff going on in the US right now (which is so broad and local and a million other things that a show with Brewmasters' structure couldn't really highlight).
Well, there you have it. I... probably should have been linking to these before they were funded, but whatever, I'm really happy to see that all of these things will be coming. I'm still curious to see if this whole Kickstarter thing will remain sustainable, but I guess time will tell, and for now, I'm pretty happy with the stuff being funded. There are definitely a ton of other campaigns that I think are interesting, especially surrounding beer and video games, but I'm a little tight on time here, so I'll leave it at that...
Posted by Mark on April 15, 2012 at 08:28 PM .: link :.


End of This Day's Posts

Wednesday, August 17, 2011

More on Spoilers
I recently wrote about the unintended consequences of spoiler culture, and I just came across this post which has been making waves around the internets. That post points to a study which concluded that readers actually like to have a story "spoiled" before they start reading.
The U.C. San Diego researchers, who compiled this chart showcasing the spoiler ratings of three genres (ironic-twist stories, mysteries, and literary stories), posited this about their findings: "once you know how it turns out, it’s cognitively easier - you’re more comfortable processing the information - and can focus on a deeper understanding of the story."
Jonah Lehrer apparently goes so far as to read the last 5 pages of the novels he reads, just so he has an idea where the story's headed. He clearly approves of the research's conclusions, and makes a few interesting observations, including:
Surprises are much more fun to plan than experience. The human mind is a prediction machine, which means that it registers most surprises as a cognitive failure, a mental mistake. Our first reaction is almost never “How cool! I never saw that coming!” Instead, we feel embarrassed by our gullibility, the dismay of a prediction error. While authors and screenwriters might enjoy composing those clever twists, they should know that the audience will enjoy it far less.
Interestingly, a few years ago, I posted about this conundrum from the opposite end. Author China Miéville basically thinks it's extremely difficult, maybe even impossible, to write a crime story or mystery with a good ending:
Reviews of crime novels repeatedly refer to this or that book’s slightly disappointing conclusion. This is the case even where reviewers are otherwise hugely admiring. Sometimes you can almost sense their bewilderment when, looking closely at the way threads are wrapped up and plots and sub-plots knotted, they acknowledge that nothing could be done to improve an ending, that it works, that it is ‘fair’ (a very important quality for the crime aficionado - no last-minute suspects, no evidence the reader hasn’t seen), that it is well-written, that it surprises… and yet that it disappoints.

The reason, I think, is that crime novels are impossible. Specifically, impossible to end.
There's a lot to parse out above, but I have two thoughts on the conclusions raised by the original study. The first is that there may actually be something to the cognitive-benefits theory of why people like this. The theory and methodology of interpreting texts is referred to as hermeneutics*. This is a useful field because language, especially figurative language, is often obscure and vague. For example, in the study of religious writings, it is often found that they are written in a certain vernacular and for a specific audience. In order to truly understand said writings, it is important to put them in their proper cultural and historical context. You can't really do that without knowing what the text says in the first place.

This is what's known as the hermeneutic circle. It's kinda like the application of science to interpretation. Scientists start by identifying a problem, and they theorize an answer to that problem. In performing and observing their experiment to test the hypothesis, they gain new insights which must then be used to revise that hypothesis. This is basically a hermeneutic circle. To apply it to the situation at hand: when reading a book, we are influenced by our overall view of the book's themes. But how are we to know the book's themes as a whole if we have not yet finished reading the parts of the book? We need to start reading the book with our own "pre-understanding", from which we hypothesize a main theme for the whole book. After we finish reading the book, we go back to each individual chapter with this main theme in mind to get a better understanding of how all the parts relate to the whole. During this process, we often end up changing our main theme. With the new information gained from this revision, we can again revise our main theme of the book, and so on, until we can see a coherent and consistent picture of the whole book. What we get out of this hermeneutic circle is not absolute and final, but it is considered to be reasonable because it has withstood the process of critical testing.
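
If my engineering background insists on caricaturing that loop in code, it comes out something like this Python sketch - a loose analogy and nothing more, where the revise function is a placeholder for the actual interpretive work, which no program can do for you:

    def hermeneutic_circle(book, initial_theme, revise, max_passes=10):
        """Loose analogy only: reread the parts in light of the current theme
        for the whole, revise the theme, and repeat until it stops changing."""
        theme = initial_theme
        for _ in range(max_passes):
            new_theme = revise(book, theme)  # reread with the main theme in mind
            if new_theme == theme:           # stable: it withstood critical testing
                break
            theme = new_theme
        return theme  # not absolute or final, just reasonable for now

    # e.g. a toy revise that converges immediately:
    print(hermeneutic_circle("the book", "initial hunch", lambda b, t: t))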

This process in itself can be fulfilling, and it's probably why folks like Jonah Lehrer don't mind spoilers - it gives them a jump start on the hermeneutic circle.

Second, the really weird thing about this study is that it sorta misses the point. As Freddie points out:
The whole point of spoilers is that they're unchosen; nobody really thinks that there's something wrong with people accessing secrets and endings about art they haven't yet consumed. What they object to is when spoilers are presented in a way that an unsuspecting person might unwittingly read them. The study suggests that people have a preference for knowing the ending, but preference involves choice. You can't deliberately act on a preference for foreknowledge of plot if you are presented the information without choosing to access it.
And that's really the point. Sometimes I don't mind knowing the twist before I start watching/reading something, but there are other times when I want to go in completely blind. Nothing says that I have to approach all movies or books (or whatever) exactly the same way, every time. And context does matter. When you see a movie without knowing anything about it, there can be something exhilarating in the discovery. That doesn't mean I have to approach all movies that way, just that variety is sometimes a good thing.

* - Yeah, I plundered that entry that I wrote for everything2 all those years ago pretty heavily. Sue me.
Posted by Mark on August 17, 2011 at 06:03 PM .: link :.


End of This Day's Posts

Wednesday, July 27, 2011

Recent Podcastery
I like podcasts, but it's depressingly hard to find ones that I really enjoy and which are still regularly published. I tend to discover a lot of podcasts just as they're going through their death throes. This is sometimes ok, as I'm still able to make my way through their archives, but then I run out of content and have to start searching for a new podcast. I will often try out new podcasts, but I have only added a few to the rotation of late. Here's some recent stuff I've been listening to:
  • The /Filmcast - I tried this podcast out a few years ago and my recollection is that I found it kinda boring. I don't know what was going on during that episode though, because I find that this is the podcast I most look forward to every week. I enjoy the format, which starts with a "what we've been watching" segment, followed by a short "movie news" segment, and then an in-depth review of a relatively new release. And when I say "in-depth", I mean very long and detailed, often in the 40-60 minute range. It's also one of the few podcasts to really get into spoilers of a new release (they are very clear about when they start the spoiler section, so no worries if you haven't seen the movie). It's something most reviews and podcasts avoid, but it's actually quite entertaining to listen to (if, that is, you've already seen the film or don't care about the film in question). Also noteworthy is that the show features 3 regular hosts, and a guest host - and the guests are usually fantastic. They're mostly other film critics, but occasionally they'll have actual actors or directors on the show as well - people like Rian Johnson (of Brick and Brothers Bloom fame) and Vincenzo Natali (of Cube and Splice fame). What's more, they don't have these guests on to just interview them - they make them participate in the general format of the show - so you get to see what Rian Johnson has been watching that week or what he thinks of various movie news, etc... It's a really unusual perspective to get on these directors, and it's stuff you rarely get in an interview. So yeah, if you like movies (and television, which they often discuss in the first segment and After Dark shows), this is a must-listen podcast.
  • The Jeff Rubin Jeff Rubin Show - No, that's not a typo, but don't ask me why he's repeated his name either. I don't really get it. But I do really like the show so far. This is the only relatively new show that I listen to, and so far, it's been great. You may recognize Rubin from his work at CollegeHumor, such as the great video series Bleep Bloop and Nerd Alert. In this podcast, he basically interviews someone in each show. So far, we've got an interview with Anamanaguchi (a band that uses old Nintendos as instruments), a discussion of Game of Thrones with another CollegeHumor guy, Jon Gabrus, a completely awesome interview with a guy who runs pizza tours in NY, and an interview with the guy responsible for writing/directing all those porn parodies that have been coming out lately (brilliant). I have to wonder how well he can keep up the quality of his guests and the variety of topics, but so far, so good.
  • Rebel FM - Video game podcasts are weird. They often spend a ton of time talking about new or upcoming games that you can't play yet, which is kinda annoying. It's also hard to go back and find an episode where they talked about x or y game (and usually the discussions aren't that enlightening because they're just talking about the mechanics of the game). Rebel FM falls into this category a bit, but what sets it apart is their letters section, which isn't really anything special, but which can be a lot of fun. Somehow, they've become known for giving out sagely advice on relationships and other life challenges. It's just funny to see this sort of thing through the lens of a video game podcast.
  • All Beers Considered - I haven't done a lot of exploring around the beer podcast realm, but I like the Aleheads website, so I tend to listen to these podcasts which generally cover various beer news stories and whatnot. It's not something I'd recommend to someone who's not a beer fanatic, but, well, I am a beer fanatic, so I like it.
  • Basic Brewing Radio - This seems to be THE homebrewing podcast, and it's got a massive archive filled with great stuff (at least, I've found many episodes to be helpful in my brewing efforts). Some stuff works better than others (really, it's kinda strange to listen to a beer tasting, especially of homebrew that you'll never get to try), but there's lots of good stuff for new brewers in the archives.
  • The Adventurenaut Cassettes - There's no real explaining this podcast. It's just really weird, disjointed, and almost psychedelic. Good when you're in a certain mood, though.
I really only have 3 or 4 shows that I really look forward to every week, but I'm always looking for more...
Posted by Mark on July 27, 2011 at 10:01 PM .: link :.


End of This Day's Posts

Sunday, July 10, 2011

Flow and Games
When I read a book, especially a non-fiction book, I usually find myself dog-earing pages with passages I find particularly interesting or illuminating. To some book lovers, I'm sure this practice seems barbaric and disrespectful, but it's never really bothered me. Indeed, the best books are the ones with the most dog-ears. Sometimes there are so many dog-ears that the width of the book is distorted, so that the top of the book (which is where the majority of my dog-ears go) is thicker than the bottom. The book Flow, by Mihaly Csikszentmihalyi1, is one such book.

I've touched on this concept before, in posts about Interrupts and Context Switching and Communication. This post isn't a direct continuation of that series, but it is related. My conception of flow in those posts is technically accurate, but also imprecise. My concern was mostly focused on how fragile the state of flow can be - something that Csikszentmihalyi doesn't spend much time on in the book. My description basically amounted to a state of intense concentration. Again, while technically accurate, there's more to it than that, and Csikszentmihalyi equates the state with happiness and enjoyment (from page 2 of my edition):
... happiness is not something that happens. It is not the result of good fortune or random chance. It is not something that money can buy or power command. It does not depend on outside events, but, rather, on how we interpret them. Happiness, in fact, is a condition that must be prepared for, cultivated, and defended privately by each person. People who learn to control inner experience will be able to determine the quality of their lives, which is as close as any of us can come to being happy.

Yet we cannot reach happiness by consciously searching for it. "Ask yourself whether you are happy," said J.S. Mill, "and you cease to be so." It is by being fully involved with every detail of our lives, whether good or bad, that we find happiness, not by trying to look for it directly.
In essence, the world is a chaotic place, but there are times when we actually feel like we have achieved some modicum of control. When we become masters of our own fate. It's an exhilarating feeling that Csikszentmihalyi calls "optimal experience". It can happen at any time, whether external forces are favorable or not. It's an internal condition of the mind. One of the most interesting things about this condition is that it doesn't feel like happiness when it's happening (page 3):
Contrary to what we usually believe, moments like these, the best moments of our lives, are not the passive, receptive, relaxing times - although such experiences can also be enjoyable, if we have worked hard to attain them. The best moments usually occur when a person's body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile. Optimal experience is thus something that we make happen. For a child, it could be placing with trembling fingers the last block on a tower she has built, higher than any she has built so far; for a swimmer, it could be trying to beat his own record; for a violinist, mastering an intricate musical passage. For each person there are thousands of opportunities, challenges to expand ourselves.

Such experiences are not necessarily pleasant at the time they occur. The swimmer's muscles might have ached during his most memorable race, his lungs might have felt like exploding, and he might have been dizzy with fatigue - yet these could have been the best moments of his life. Getting control of life is never easy, and sometimes it can be definitely painful. But in the long run optimal experiences add up to a sense of mastery - or perhaps better, a sense of participation in determining the content of life - that comes as close to what is usually meant by happiness as anything else we can conceivably imagine.
This is an interesting observation. The best times of our lives are often hectic, busy, and frustrating while they're happening, and yet the feeling of satisfaction we get after-the-fact seems worth the effort. Interestingly, since Flow is a state of mind, experiences that are normally passive can become a flow activity through taking a more active role. Csikszentmihalyi makes an interesting distinction between "pleasure" and "enjoyment" (page 46):
Experiences that give pleasure can also give enjoyment, but the two sensations are quite different. For instance, everyone takes pleasure in eating. To enjoy food, however, is more difficult. A gourmet enjoys eating, as does anyone who pays enough attention to a meal so as to discriminate the various sensations provided by it. As this example suggests, we can experience pleasure without any investment of psychic energy, whereas enjoyment happens only as a result of unusual investments of attention. A person can feel pleasure without any effort, if the appropriate centers in his brain are electrically stimulated, or as a result of the chemical stimulation of drugs. But it is impossible to enjoy a tennis game, a book, or a conversation unless attention is fully concentrated on the activity.
As someone who watches a lot of movies and reads a lot of books, I can definitely see what Csikszentmihalyi is saying here. Reading a good book is not always a passive activity; it can be a dialogue2. Rarely do I accept what someone has written unconditionally or without reserve. For instance, in the passage above, I remember thinking about how arbitrary Csikszentmihalyi's choice of terms was - would the above passage be any different if we switched "pleasure" and "enjoyment"? Ultimately, that doesn't really matter. Csikszentmihalyi's point is that there's a distinction between hedonistic, passive experiences and complex, active experiences.

There is, of course, a limit to what we can experience. In a passage that is much more concise than my post on Interrupts and Context Switching, Csikszentmihalyi expands on this concept:
Unfortunately, the nervous system has definite limits on how much information it can process at any given time. There are just so many "events" that can appear in consciousness and be recognized and handled appropriately before they begin to crowd each other out. Walking across a room while chewing bubble gum at the same time is not too difficult, even though some statesmen have been alleged to be unable to do it; but, in fact, there is not that much more that can be done concurrently. Thoughts have to follow each other, or they get jumbled. While we are thinking about a problem we cannot truly experience either happiness or sadness. We cannot run, sing, and balance the checkbook simultaneously, because each one of those activities exhausts most of our capacity for attention.
In other words, human beings are kinda like computers in that we execute instructions in a serial fashion, and things like context switches are quite disruptive to the concept of optimal experience3.
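
As a toy model of that serial, switch-averse attention (and it is only a toy - the numbers below are invented, and real cognition is not a CPU scheduler), you can charge a fixed price every time an unfinished task gets set aside and see what divided attention costs:

    def total_time(tasks, slice_len, switch_cost):
        """Round-robin through tasks in fixed time slices, paying a fixed
        cost whenever an unfinished task is set aside for another."""
        remaining = list(tasks)
        elapsed = 0.0
        while remaining:
            task = remaining.pop(0)
            work = min(slice_len, task)
            elapsed += work
            if task > work:  # not done: requeue it and pay the switch cost
                remaining.append(task - work)
                elapsed += switch_cost
        return elapsed

    print(total_time([30, 30, 30], slice_len=30, switch_cost=2))  # 90.0: one thing at a time
    print(total_time([30, 30, 30], slice_len=5, switch_cost=2))   # 120.0: same work, constant interruption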

Given all of the above, it's easy to see why there isn't really an easy answer about how to cultivate flow. Csikszentmihalyi is a psychologist and is thus quite careful about how he phrases these things. His research is extensive, but necessarily imprecise. Nevertheless, he has identified eight overlapping "elements of enjoyment" that are usually present during flow. Through his extensive interviews, he has noticed at least a few of these major components come up whenever someone discusses a flow activity. A quick summary of the components (pages 48-67):
  • A Challenging Activity that Requires Skills - This is pretty self explanatory, but it should also be noted that "challenging" does not mean "impossible". We need to confront tasks which push our boundaries, but which we also actually have a chance of completing.
  • The Merging of Action and Awareness - When all of our energy is concentrated on the relevant stimuli. This is related to some of the below components.
  • Clear Goals and Feedback - These are actually two separate components, but they are interrelated, and on a personal level, I feel like these are the most important of the components... or at least, the most difficult to get right. In particular, accurate feedback and measurement are much more difficult than they sound. Sure, for some activities they're simple and easy, but for a lot of more complex ones, the metrics either don't exist or are too opaque. This is something I struggle with in my job. There are certain metrics that are absolute and pretty easy to track, but there are others that are more subjective and exceedingly difficult to quantify.
  • Concentration on the Task at Hand - Very much related to the second point above, this particular component is all about how that sort of intense concentration removes from awareness all the worries and frustrations of everyday life. You are so focused on your task that there is no room in your mind for irrelevant information.
  • The Paradox of Control - Enjoyable experiences allow people to exercise a sense of control over their actions. To look at this another way, you could see it as a lack of worry about losing control. The paradox comes into play because this feeling is somewhat illusory. What's important is the "possibility, rather than the actuality, of control."
  • The Loss of Self-Consciousness - Again related to a couple of the above, this one is about how when you're involved in flow, concern about the self disappears. Being so engrossed in a project or a novel or whatever that you forget to eat lunch, and things along those lines. Interestingly, this sort of thing eventually does lead to a sense of self that emerges stronger after the activity has ended.
  • The Transformation of Time - The sense of duration of time is altered. Hours pass by in minutes, or conversely, minutes pass by in what seem like hours. As Einstein once said: "Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT'S relativity."
So what are the implications of all this? There were a few things that kept coming to mind while reading this book.

First, to a large extent, I think this helps explain why video games are so popular. Indeed, many of the flow activities in the book are games or sports: chess, swimming, dancing, etc... He doesn't mention video games specifically, but they seem to fit the mold. Skills are certainly involved in video games. They require concentration and thus often lead to a loss of self-consciousness and lack of awareness of the outside world. They cause you to lose track of time. They permit a palpable sense of control over their digital environment (indeed, video games necessarily present a limited paradigm of reality, and that is precisely what lends the player an impression of control and agency). And perhaps most importantly, the goals are usually very clear and the feedback is nearly instantaneous. It's not uncommon for people to refer to video games in terms of addiction, which brings up an interesting point about flow (page 70):
The flow experience, like everything else, is not "good" in an absolute sense. It is good only in that it has the potential to make life more rich, intense, and meaningful; it is good because it increases the strength and complexity of the self. But whether the consequences of any particular instance of flow is good in a larger sense needs to be discussed and evaluated in terms of more inclusive social criteria. The same is true, however, of all human activities, whether science, religion, or politics.
Flow is value neutral. In the infamous words of Buckethead, "Like the atom, the flyswatter can be a force for great good or great evil." So while video games could certainly be a flow activity, are they a good activity? That is usually where the controversy stems from. I believe the flow achieved during video game playing to be valuable, but I can also see why some wouldn't feel that way. Since flow is an internal state of the mind, it's difficult to observe just how that condition is impacting a given person.

Another implication that kept occurring to me throughout the book is what's being called "the gamification of everything". The idea is to use the techniques of game design to get people interested in what are normally non-game activities. This concept is gaining traction all over the place, but especially in business. For example, Target encouraged its cashiers to speed up checkout by instituting a system of scoring and leaderboards that gives them instant feedback. In the book, Csikszentmihalyi recounts several examples of employees in seemingly boring jobs, such as assembly lines, who have turned their job from a tedious bore into a flow activity thanks to measurement and feedback. There are a lot of internet startups that use techniques from gaming to enhance their services. Many use an awards system with points and leaderboards. Take FourSquare, with its badges and "Mayorships", which turns "going out" (to restaurants, bars, and other commercial establishments) into a game. Daily Burn uses game mechanics to help people lose weight. Mint.com is a service that basically turns personal finance into a game. The potential examples are almost infinite4.
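The core mechanic in most of these schemes is trivially simple, which is part of why it spreads so easily. Here's a minimal sketch in Python of the score-and-rank loop described above - the names and point values are all made up for illustration, not taken from any real system:

    from collections import defaultdict

    scores = defaultdict(int)

    def record_checkout(cashier, seconds):
        # Faster checkouts earn more points: instant, unambiguous feedback.
        points = max(0, 60 - int(seconds))
        scores[cashier] += points
        return points

    record_checkout("pat", 35)
    record_checkout("sam", 50)
    record_checkout("pat", 40)

    # The leaderboard supplies the social comparison that drives the "game".
    for rank, (name, pts) in enumerate(
            sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1):
        print(f"{rank}. {name}: {pts} pts")

Notice that the hard part isn't the code; it's choosing a scoring function that rewards what you actually want, which circles back to the "Clear Goals and Feedback" component above.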

Again, none of this is necessarily a "good" thing. If Target employees are gamed into checking out faster, are they sacrificing accuracy in the name of speed? What is actually gained by being the "mayor" of a bar in Foursquare? Indeed, many marketing schemes that revolve around the gamification of everything are essentially ways to "trick" customers or "exploit" psychology for profit. I don't really have a problem with this, but I do think it's an interesting trend, and its basis is the flow created by playing games.

On a more personal note, one thing I can't help but notice is that my latest hobby of homebrewing beer seems, at first glance, to be a poor flow activity. Or, at least, the feedback part of the process is not very good. When you brew a beer, you have to wait a few weeks after brew day to bottle or keg it, then you have to wait some time after that (less if you keg) before you can actually taste the beer to see how it came out. Sure, you can drink the unfermented wort or the uncarbonated, unconditioned beer after primary fermentation, but that's not an exact measurement, and even then, you're waiting long periods of time. On the other hand, flow is an internal state of mind, and the process of brewing the beer in the first place offers plenty of opportunities for concentration and smaller bits of feedback. The more I thought about it, the more those three hours of brew day seem like a flow activity in themselves. The fact that I get to try the beer a few weeks or months later to see how it turned out is just an added bonus. Incidentally, the saison I brewed a few weeks ago? It seems to have turned out well - I think it's my best batch yet.

In case you can't tell, I really enjoyed this book, and as longwinded as this post turned out, there's a ton of great material in the book that I'm only touching on. I'll leave you with a quote that seems to sum things up pretty well (page 213): "Being in control of the mind means that literally anything that happens can be a source of joy."

1 - I guess it's a good thing that I'm writing this as opposed to speaking about it, as I have no idea how to pronounce any part of Mihaly Csikszentmihalyi's name.

2 - Which is not to take away from the power of books or movies where you sit down, turn your brain off, and veg out for a while. Hey, I think True Blood is coming on soon...

3 - This is, of course, a massive simplification of a subject that we don't even really understand that well. My post on Interrupts and Context Switching goes into more detail, but even that is lacking in a truly detailed understanding of the conscious mind.

4 - I have to wonder how familiar Casinos are with these concepts. I'm not talking about the games of chance themselves, though that is also a good example of a flow activity (and you can see why gambling addiction could be a problem as a result). Take, for example, blackjack. The faster the dealer gets through a hand of blackjack, the higher the throughput of the table, and thus the more money a Casino would make. Casinos are all about probability, and the higher the throughput, the bigger their take. I seriously wonder if blackjack dealers are measured in some way (in terms of timing, not money).
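To put some hypothetical numbers on that (invented for illustration, not real casino figures): the house's hourly take scales linearly with hands dealt, so a faster dealer is directly worth money.

    # Back-of-the-envelope blackjack table economics; all numbers invented.
    def hourly_take(hands_per_hour, seats=5, avg_bet=25, house_edge=0.01):
        return hands_per_hour * seats * avg_bet * house_edge

    print(hourly_take(60))  # $75.00/hour at a leisurely pace
    print(hourly_take(80))  # $100.00/hour just by dealing faster

Same edge, same bets - a third more profit, purely from throughput.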
Posted by Mark on July 10, 2011 at 07:44 PM .: link :.


End of This Day's Posts

Wednesday, May 25, 2011

How Boyd Wrote
I'm currently reading a biography of John Boyd, and in light of Sunday's post, I found a recent chapter particularly interesting. Boyd was a fighter pilot in the Air Force. He flew in Korea, made a real name for himself at Fighter Weapons School (which was later copied by the Navy - you may have heard of their version: Top Gun), and spent the latter part of his career working on groundbreaking strategic theories. He was an instructor at FWS for several years, and before leaving, he made his first big contribution to the Air Force: a tactics manual called Aerial Attack Study. Despite everything that has happened since, including Vietnam and the Gulf War, nothing substantial has been added to it. It has served as the official tactics manual all over the world for over 40 years (actually, more like 50 at this point).

And Boyd almost didn't write it. Robert Coram (the author of the aforementioned biography) summarizes the unconventional manner in which the manual was written (on page 104 of my edition):
Boyd could not write the manual and continue flying and teaching; there simply wasn't enough time. Plus, the idea of sitting down at a desk and spending hundreds of hours writing a long document brought him to the edge of panic. He was a talker, not a writer. When he talked his ideas tumbled back and forth and he fed off the class and distilled his thoughts to the essence. But writing meant precision. And once on paper, the ideas could not be changed. ...

Spradling came up with the solution. "John, don't make this a big thing. We have some good Dictaphones. Why don't you just dictate the damn thing?"
It's a subject I didn't really cover much in my last post: the method of communication can impact the actual message. The way we communicate changes the way we think. Would Boyd's work have been as great if he didn't dictate it? Maybe, but it probably wouldn't have been the same.

Incidentally, I don't normally go in for biographies, but this is an excellent book so far. Part of that may be that Boyd is a genuinely interesting guy and that he was working on stuff that interests me, but I'm still quite enjoying myself.
Posted by Mark on May 25, 2011 at 08:09 PM .: link :.


End of This Day's Posts

Sunday, May 22, 2011

Communication
About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I'll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I'm busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.

In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.

Of course, this is a massive subject that can't even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it's worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we've already established, is bad for getting things done.

Let's say that you're working on something large and complex. You've managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being "in the zone"). Flow is basically a condition of deep concentration and immersion. When you're in this state, you feel energized and often don't even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda... flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you're doing, listen to the question, and hopefully provide a helpful answer. This isn't necessarily a bad thing (we all enjoy helping other people out from time to time), but it also represents a series of context switches that will most likely break you out of your flow.

Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn't happen instantaneously; it requires a transition period in which you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you'll need to spend some time getting your brain back up to speed.

In isolation, the kind of interruption described above might still be alright every now and again, but imagine if that scenario played out a couple dozen times in a day. If you're supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company, and the nature of our business sometimes requires frequent interruptions, so there are times when I'm in a near constant state of context switching. None of this is to say I'm not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. All of this makes working on particularly complicated problems extremely difficult.
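A toy model makes the cost concrete. Here's a quick Python sketch - the time figures are invented for illustration, but the structure (every interruption costs its handling time plus a fixed re-immersion cost) follows from the transition period described above:

    # Toy model: each interruption costs handling time plus re-immersion time.
    WORK_MINUTES = 480          # an eight-hour day
    REIMMERSION_COST = 15       # minutes to get back "in the zone"

    def productive_minutes(interruptions, handling=5):
        lost = interruptions * (handling + REIMMERSION_COST)
        return max(0, WORK_MINUTES - lost)

    for n in (0, 5, 24):
        print(n, "interruptions ->", productive_minutes(n), "minutes of deep work")

Under these made-up assumptions, the "couple dozen" scenario leaves literally zero minutes of uninterrupted work: 24 × (5 + 15) = 480.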

In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they're immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn't nearly as bad as some workplaces that have a public address system - basically a way to interrupt hundreds or even thousands of people in order to reach one person - but it does still represent a challenge.

Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.

One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find the answer there instead of interrupting you (and potentially those around you). Naturally, this isn't quite as effective as we'd like, mostly because documenting information is a difficult and time-consuming task in itself, and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:
The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.
I don't think it's quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it's just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they've been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren't documented at all.

There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc...). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it's a nonesuch beast. I don't want to get too carried away talking about documentation, so I'll leave it at that (if you're still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it's obviously not the only way to minimize communication strain.

I've previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has become more and more reliant on software, so it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant's own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.
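The essence of asynchrony is just a queue sitting between sender and receiver. A minimal Python sketch (the messages are made up) shows the property that matters here: senders never block the recipient, and the recipient drains the queue on their own schedule:

    from collections import deque

    inbox = deque()  # messages accumulate here; nothing interrupts the recipient

    # Throughout the day, senders drop messages off and move on.
    inbox.append("Can you review my spec?")
    inbox.append("Any update on that bug?")

    # ...the recipient stays in flow, then handles everything at a natural break.
    while inbox:
        print("Handling:", inbox.popleft())

Synchronous communication, by contrast, is a function call: the sender stops and waits for your answer, and so do you.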

The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least among younger people). The only major communication tools invented in the past few decades that aren't asynchronous are instant messaging and chat clients, and even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it's really just an extension of conference calls.)

The benefit of asynchronous communication is, of course, that it doesn't (or at least it shouldn't) represent an interruption. If you're immersed in a particular task, you don't have to stop what you're doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.

Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there's no easy back-and-forth, there's little room for clarification, and you're often left with only your own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.

One of my favorite quotations is from Anne Morrow Lindbergh:
To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!
It's difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.

I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.

You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don't work like that. Not to mention the fact that most of the communication efforts I'm talking about are the precursors to the writing of a computer program!

Despite all of this, a light formalization can be helpful, and the fact that teams are required to produce important documentation practically forces a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define the various systems, acronyms, and other jargon that are referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines for meaningful dialogue outside of the document. Of course, it wouldn't quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.
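That kind of light formalization can even be checked mechanically. As a toy illustration (the glossary entries and document text here are invented), a few lines of Python can flag any acronym that a spec uses without defining:

    import re

    # Define jargon once, up front...
    GLOSSARY = {
        "SLA": "Service Level Agreement",
        "ETL": "Extract, Transform, Load",
    }

    doc = "The nightly ETL job must finish within the SLA window; see the API spec."

    # ...then verify the document never uses an unglossed acronym.
    used = set(re.findall(r"\b[A-Z]{2,}\b", doc))
    missing = sorted(used - GLOSSARY.keys())
    if missing:
        print("Define these before publishing:", missing)  # -> ['API']

It's nowhere near a legal contract or a compiler, but that's the point: a small, cheap guardrail against a common source of confusion.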

I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often follows its own grammatical patterns, which can be different from those of written language. Furthermore, face-to-face communication also consists of body language and other signals that can influence the meaning of what is said, depending on the context in which it is spoken. This sort of nuance just isn't possible in written form.

This actually illustrates a wider problem. Again, I'm no linguist and haven't spent a ton of time examining the origins of language, but it seems to me that language emerged as a far more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically speaking, relatively recent developments), we're treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever-changing concept of political correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.

Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It's very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it's not as large a problem. But most organizations don't have such luxuries. Indeed, we're usually lucky if something is documented at all, let alone well organized and optimized.

The obvious question, which I've skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?

Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only among two people. The Wright brothers, Gilbert and Sullivan, and so on.

So why have design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not later achievements? For instance, who designed the Saturn V rocket? No one can say, because it was a large team of people (and it was the culmination of numerous predecessors made by other teams of people). Why is that?

The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that "specialization is for insects" notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like fluid dynamics, and you'll find people devoting most of their lives to the study of that field. Furthermore, the applications of that field go far beyond what we'd assume. Someone tinkering in their garage couldn't make the Saturn V alone. They'd require too much expertise in a wide and disparate array of fields.

This isn't to say that someone tinkering in their garage can't create something wonderful. Indeed, that's where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind... but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)

And with more people comes more communication. It's a necessity. You cannot collaborate without large amounts of communication. In Tom DeMarco and Timothy Lister's book Peopleware, they call this the High-Tech Illusion:
...the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. ... The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.
(Emphasis mine.) That insight is part of what initially inspired this series of posts. It's very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I'm getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I've been known to say, human beings don't so much solve problems as trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I've blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I'm hoping the groundwork laid in these first two posts will mean that the next post won't be quite so long, but you never know!
Posted by Mark on May 22, 2011 at 07:51 PM .: link :.


End of This Day's Posts

Wednesday, March 30, 2011

Artificial Memory
Nicholas Carr cracks me up. He's a skeptic of technology, and in particular, the internet. He's the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he raises very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but they don't represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what's really going on.

Otakun recently pointed to an excerpt from Carr's latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:
...by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.
While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he's saying here. We often hear about how modern education is improved by focusing on things like "thinking skills" and "problem solving", but the big problem with emphasizing that sort of work ahead of memorization is that the analysis required by such processes depends on a base level of knowledge in order to be effective. This is something I've expounded on at length in a previous post, so I won't rehash it here.

The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn't come without its own set of challenges, and I'm sure Carr would be quick to point out that such a crash course can instill a false sense of security in us hapless internet users. After all, how do we know when we've reached that base level of competence? Our incompetence could very well be masking our ability to recognize our incompetence. However, I don't think that's an insurmountable problem. Most of us that use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is determining just how much I don't know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.

Some other assorted thoughts about Carr's excerpt:
  • I love the concept of a "commonplace book" and immediately started thinking of how I could implement one... which is when I realized that I've actually been keeping one, more or less, for the past 10 or so years on this blog. That being said, it's something I wouldn't mind becoming more organized about, and I've got some interesting ideas about what my personal take on a commonplace would look like.
  • Carr insists that the metaphor that portrays the brain as a computer is wrong. It's a metaphor I've certainly used in the past, though I think what I find most interesting about it is how different computers and brains really are. The problem with the metaphor is that our brains work nothing even remotely like the way our current computers actually work. However, many of the concepts of computer science and engineering can be useful in helping to model how the brain works. I'm certainly not an expert on the subject, but for example: you could model the brain as a binary computer because our neurons are technically binary. However, our neurons don't just turn on or off; they pulse, and things like frequency and duration can yield dramatically different results (see the toy sketch after this list). Not to mention the fact that the brain seems to be a massively parallel computing device, as opposed to the mostly serial electronic tools we use. That is, of course, a drastic simplification, but you get the point. The metaphor is flawed, as all metaphors are, but it can also be useful.
  • One thing that Carr doesn't really get into (though he may cover this in a later chapter) is how notoriously unreliable human memory actually is. Numerous psychological studies show just how impressionable and faulty our memory of an event can be. This doesn't mean we should abandon our biological memory, just that having an external, artificial memory of an event (i.e. some sort of recording) can be useful in helping to identify and shape our perceptions.
  • Of course, even recordings can yield a false sense of truth, so things like Visual Literacy are still quite important. And again, one cannot analyze said recordings accurately without a certain base set of knowledge about what we're looking at - this is another concept that has been showing up on this blog for a while now as well: Exformation.
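On that second bullet, here's the toy sketch. It's deliberately crude (a simple leaky accumulator stands in for whatever reads the neuron's output, and the spike trains and leak rate are invented), but it shows how two "binary" signals that are equally "on" can carry very different messages once frequency matters:

    # Two spike trains, both just 0s and 1s, read by a leaky accumulator.
    def leaky_sum(spikes, leak=0.5):
        level = 0.0
        for s in spikes:
            level = level * leak + s  # decay a bit, then add the incoming spike
        return level

    slow = [1, 0, 0, 0] * 5   # neuron "on", pulsing at low frequency
    fast = [1, 1, 0, 1] * 5   # same neuron "on", pulsing at high frequency

    print(leaky_sum(slow))  # small accumulated signal
    print(leaky_sum(fast))  # much larger: frequency changes the message

A strictly binary model would call both inputs identical; the pulse-based reading distinguishes them easily.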
And that's probably enough babbling about Carr's essay. I generally disagree with the guy, but on this particular subject, I think we're more in agreement.
Posted by Mark on March 30, 2011 at 06:06 PM .: link :.


End of This Day's Posts

Wednesday, February 02, 2011

Anecdotal
I'm currently reading Cognitive Surplus: Creativity and Generosity in a Connected Age, by Clay Shirky. There seems to be a pattern emerging from certain pop-science books I've been reading in the past few years. Namely, a heavy reliance on fascinating anecdotes, counter-intuitive psychology experiments, and maybe a little behavioral economics thrown in for good measure. Cognitive Surplus most certainly fits the mold. Another book I've read recently, How We Decide by Jonah Lehrer, also fits. Most of Malcolm Gladwell's work does too (indeed, he's a master of the anecdote).

I don't think there's anything inherently wrong with this format. In fact, it can be quite entertaining and sometimes even informative. But sometimes I feel a bit uncomfortable with the conclusions that are drawn from all of this. Anecdotes, even well documented anecdotes, can make for great reading, but that doesn't necessarily make them broadly applicable. Generalizing or extrapolating from anecdotes can lead to some problematic conclusions. This is a difficult subject to tackle though, because humans seem to be hard wired to do exactly that. The human brain is basically a giant heuristic machine.

This is not a bad thing. Heuristics are an important part of human life because we don't always have all the information needed to use a more reliable, logical process. We all extrapolate from our own experiences; that is to say, we rely on anecdotal evidence in our daily lives all the time. It allows us to operate in situations we don't fully understand.

Unfortunately, it's also subjective and not entirely reliable. The major issue is that it's rather easy to convince yourself that you properly understand a problem when, in fact, you don't. In other words, our incompetence masks our ability to recognize our incompetence. As a result, we see things like Cargo Cults. Superstitions and similar comforting beliefs are also heuristics, albeit generally false ones. But they arise because producing such explanations is a necessary part of our lives. We cannot explain everything we see, and since we often need to act on what we see, we must rely on less than perfect heuristics and processes.

So in a book like Cognitive Surplus, there's this instinctual impulse to agree with conclusions extrapolated from anecdotes, which is probably the source of my discomfort. It's not that I doubt the factual content of the anecdotes, it's that I'm not always sure how to connect the anecdote with the conclusion. In many cases, it seems like an intuitive leap, but as previously noted, this is a subjective process.

Of course, Shirky does not rely solely on anecdotal evidence in his book (nor do the other authors mentioned above). There are the aforementioned psychology experiments and behavioral economics studies that rely on the scientific notions of strictly controlled conditions and independent reproduction. The assumption is that conclusions extrapolated from this more scientific data are more reliable. But is it possible that they could suffer from the same problems as anecdotes?

Maybe. The data is almost always presented in an informal, summarized format (very similar, in fact, to the way anecdotes are formed), which can leave a lot of wiggle room. For instance, strictly controlled conditions necessary to run an experiment can yield qualifying factors that will make the results less broadly applicable than we may desire. I find this less troubling in cases where I'm already familiar with a study, such as the Ultimatum Game. It also helps that such a study has been independently reproduced countless times since it first appeared, and that many subsequent tests have refined various conditions and variables to see how the results would come out (and they all point in the expected direction).

Later in the book, Shirky references an economic study performed on 10 day-care centers in Haifa, Israel. I will not get into the details of the study (this post is not a review of Shirky's book, after all), except to say that it was a single study, performed in a narrow location, with a relatively small data set. I don't doubt the objective results, but unlike the Ultimatum Game, this study does not seem to have a long history of reproduction, nor did the researchers conduct obvious follow-up experiments (perhaps there are additional studies, but they are not referenced by Shirky). The results seem to violate certain economic assumptions we're all familiar with, but they are also somewhat intuitive when you realize why the results came out the way they did. On the other hand, how do we know why they came out that way? I'm virtually certain that if you vary one particular variable of the experiment, you'll receive the expected result. Then what?

I don't mean to imply that these books are worthless or that they don't contain valuable insights. I generally find them entertaining, helpful, and informative, sometimes even persuasive. I like reading them. However, reading a book like this is not a passive activity. It's a dialogue. In other words, I don't think that Cognitive Surplus is the last word on the subjects Shirky is writing about, despite a certain triumphal tone in his writing. It's important to recognize that there is probably more to this book than what is on the page. That's why there's a lengthy Notes section with references to numerous papers and studies for further reading and clarification. Cognitive Surplus raises some interesting questions and it proposes some interesting answers, but it's not the end of the conversation.

Update: I thought of a few books that are better about this sort of thing, and there's a commonality that's somewhat instructive. One example is The Paradox of Choice: Why More Is Less, by Barry Schwartz. Another is Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi. The interesting thing about both of these books is that they are written by researchers who conducted a lot of the research themselves. Both are very careful in the way they phrase their conclusions, making sure to point out qualifying factors, etc... Shirky, Gladwell, etc... seem to be summarizing the work of others. This is also valuable, in its own way, but perhaps less conclusive? (Then again, correlation does not necessarily mean causation. This update basically amounts to a heuristic, and one based on the relatively small sample of pop-science books I've read, so take it with a grain of salt.)

Again Update: I wrote this post before finishing Cognitive Surplus. I'm now finished, and in the last chapter, Shirky notes (pages 191-192):
The opportunity we collectively share, though, is much larger than even a book's worth of examples can express, because those examples, and especially the ones that involve significant cultural disruption, could turn out to be special cases. As with previous revolutions driven by technology - whether it is the rise of literate and scientific culture with the spread of the printing press or the economic and social globalization that followed the invention of the telegraph - what matters now is not the new capabilities that we have, but how we turn those capabilities, both technical and social, into opportunities.
In short, I think Shirky is acknowledging what was making me uncomfortable throughout the book: anecdotes and examples can't paint the whole picture. Shirky's book is not internet triumphalism, but a call to action. I suppose you could argue that even the assertion that these opportunities exist at all is a form of triumphalism, but I don't think so.
Posted by Mark on February 02, 2011 at 08:27 PM .: link :.


End of This Day's Posts

Sunday, November 21, 2010

Adventures in Brewing - Part 2: The Bottling
A couple of weeks ago, I started brewing an English Brown Ale. After two weeks in the fermenter, I went ahead and bottled the beer this weekend. Just another couple of weeks in the bottle to condition, and they should be ready to go (supposedly, the impatient can try it after a week, which I might have to do, just to see what it's like and how it ages).

The final gravity ended up at around 1.008, so if my calculations (and my hydrometer readings, which are probably more approximate than I'd like) are correct, this should yield something around 4.5% alcohol. Both my hydrometer readings were a bit low according to the worksheet/recipe I was using, but that ABV is right in the middle of the range. I suspect this means there won't be as much sugar in the beer and thus the taste will be a bit less powerful, but I guess we'll find out.
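For the curious, the arithmetic behind that estimate is the standard homebrewer's approximation: ABV ≈ (OG − FG) × 131.25. My original gravity reading isn't recorded in this post, so the OG below is an assumed value chosen for illustration; only the 1.008 final gravity is from my actual notes:

    # Standard homebrewer's ABV approximation; the OG here is assumed.
    og = 1.042  # hypothetical original gravity
    fg = 1.008  # measured final gravity
    abv = (og - fg) * 131.25
    print(f"{abv:.1f}% ABV")  # -> 4.5% ABV

An original gravity in that neighborhood, finishing at 1.008, is what lands you at roughly 4.5%.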

I ended up with a little more than a case and a half of bottled beer, which is probably a bit low. I was definitely overcautious about racking the beer to my bottling bucket. Not wanting to transfer any yeast, and never having done this before, I was a little too conservative in stopping the siphoning process. (The siphoning itself was a lot easier and faster than I was expecting - just add the priming sugar and get the siphon started, and it only took a few minutes to transfer the great majority of the beer to the bottling bucket.) Next time I should be able to get around two full cases out of a 5 gallon batch.

Once in the bottling bucket, the process went pretty smoothly, and I actually found filling the bottles up and capping them to be pretty fun (the bottling wand seems like a life saver - I'd hate to do this with just a tube). Once I got towards the bottom of the bucket, it was a bit of a challenge to get as much out of there as possible without oxidizing the beer too much. I managed to get myself a quick cup of the beer and took a few sips. Of course, it was room temperature and not carbonated enough (carbonation happens in the bottle, thanks to the priming sugar), but it sure was beer. I didn't detect anything "off" about the taste, and it smelled pretty good too. Maybe I managed to not screw it up!
Siphoning the beer
The worst part of the process was really the sanitation piece. Washing and scrubbing two cases of beer bottles, then getting them to dry out (as much as I could - I'm sure some still had some water in them when I was bottling, which is probably bad) was a huge, tedious pain in the butt. That was probably the most time consuming portion of the process. The actual bottling/capping probably took the same amount of time, but that was more fun. It probably took a little over 2 hours in total, which actually wasn't that bad. In the end, I'm pretty happy with my first experience in brewing. Even if the beer turns out terrible or bland, I feel like I've learned a lot and will undoubtedly have an easier time of it in the next round. Speaking of which, I'm looking to put together a recipe for a Belgian Style Tripel. This will be a higher gravity beer and probably take longer to brew, but it's one of my favorite styles and it's apparently not that difficult either.

(Cross posted at the Kaedrin Beer Blog, along with some other stuff posted today)
Posted by Mark on November 21, 2010 at 07:04 PM .: link :.


End of This Day's Posts

Wednesday, November 10, 2010

Mute
Earlier in the year, I had noticed a pile of books building up on the shelf and have made a concerted effort to get through them. This has gone smoothly at times, and at other times it's ground to a halt. Then there's the fact that I can't seem to stop buying new books to read. Case in point, during the Six Weeks of Halloween, I thought it might be nice to read some horror, and realized that most of what I had on my shelf was science fiction, fantasy, detective fiction, or non-fiction (history, technology, biography, etc...) So I went out and picked up a collection of Richard Matheson short stories called Button, Button (the title story was the source material for a very loose film adaptation, The Box).

It was a very interesting collection of stories, many of which play on variations of the moral dilemma posed most famously in the title story, Button, Button:
"If you push the button," Mr Steward told him, "somewhere in the world, someone you don't know will die. In return for which you will receive fifty thousand dollars."
In the film adaptation, the "reward" was raised to a million dollars, but then, they also added a ton of other stuff to what really amounts to a tight, 12-page story. Anyway, there are lots of other stories, most containing some sort of moral dilemma along those lines (or someone exploiting such a dilemma). In particular, I enjoyed A Flourish of Strumpets and No Such Thing as a Vampire, but I found myself most intrigued by one of the longer stories, titled Mute. I suppose mild spoilers ahead, if this is something you think you might want to read.

The story concerns a child named Paal. His parents were recent immigrants and he was homeschooled, but they died in a fire, leaving Paal to the care of the local sheriff and his wife. Paal is mute, and the community is quite upset by this. Paal ends up being sent to school, but his seeming lack of communication skills causes issues, and the adults continually attempt to get Paal to talk.

I will leave it at that for now, but if you're at all familiar with Matheson, you can kinda see where this is going. What struck me most was how much the story is a sign of its times. Of course, all art is a product of its cultural and historical context, but for horror stories, that must be doubly so. Most of the stories in this collection were written and published in the 1950s and early 1960s, which I find interesting. This one is primarily about the crushing pressure of conformity, something that was surely on Matheson's mind as he emerged from the uniformity of the 1950s. The cultural norms of the 50s were perhaps overly traditional, but after having witnessed the deadliest conflict in human history in the 1940s, you can hardly blame people for wanting some semblance of tradition and stability in their lives. Of course, that sort of uniformity isn't really natural, and like a pendulum, things swing from one extreme to the other, until eventually they settle down. Or not.

Anyway, writing in the early 60s (or maybe even the late 50s), Matheson was clearly disturbed by the impulse to force conformity, and Mute is a clear expression of this anxiety. Interestingly, the story is almost as horrific in today's context, but for different reasons. Matheson was writing in response to a society that emphasized conformity, and he had no doubt witnessed such abuses himself. The end of the story, though, is somewhat bittersweet. It's not entirely tragic, and it's almost an acknowledgement that conformity isn't necessarily evil.
It was not something easily judged, he was thinking. There was no right or wrong of it. Definitely, it was not a case of evil versus good. Mrs. Wheeler, the sheriff, the boy's teacher, the people of German Corners - they had, probably, all meant well. Understandably, they had been outraged at the idea of a seven-year-old boy not having been taught to speak by his parents. Their actions were, in light of that, justifiable and good.

It was simply that, so often, evil could come of misguided good.
In today's world, we see the opposite of the 1950s in many ways. Emphasis is no longer placed on conformity (well, perhaps it still is in some places), but rather on rugged individuality. There are no one-size-fits-all pieces of culture anymore. We've got hundreds of varieties of spaghetti sauce, thousands of music choices that fit on a device the size of a business card, movies designed to appeal to small demographics, and so on. We deal with problems like the paradox of choice, and the internet has given rise to the niche and concepts like the Long Tail. Of course, rigid non-conformity is, in itself, a form of conformity, but I can't imagine a story like Mute being written in this day and age. A comparable story would be about how lost someone becomes when they don't conform to societal norms...
Posted by Mark on November 10, 2010 at 09:23 PM .: link :.


End of This Day's Posts

Sunday, September 05, 2010

Tasting Notes...
Another edition of Tasting Notes, a series of quick hits on a variety of topics that don't really warrant a full post. So here's what I've been watching/playing/reading/drinking lately:

Television
  • The only show I watch regularly is True Blood, and even that has been a bit of a bust this season. There are some good things about it, but it seems like all the side characters have become annoying - even Lafayette. You can't keep increasing the number of big character arcs indefinitely, and this season definitely hit the limit and then stomped over it. All that being said, it's still an entertaining show, and last week's cliffhanger was kinda interesting, except that I know better than to trust that it will be conclusive, which is probably a bad thing. Unless it turns out the way I expect, which is kinda ironic. A damned if you do, damned if you don't situation, I guess.
  • Netflix Watch Instantly Pick of the Week: Mythbusters. Yeah, we've all seen these episodes, but putting them on Netflix Watch Instantly is a problem. I didn't know they were on there until Shamus mentioned it off-handedly, and now I find myself watching them all the time.
Video Games
  • It turns out that I've played approximately 0 hours of GTA IV since the last Tasting Notes, so I'm thinking that I should just move on to something else. Complaints are, more or less, the same as last time. All the good things about the game are the same as GTA III, and all the new bits only seem to weigh it down. And for crying out loud, it's ok to let people save their games whenever. This is something that I've become pretty inflexible on - if you have static save points that force me to replay stuff and rewatch cutscenes, I'm not going to like your game much.
  • In lieu of GTA IV, I've been replaying Half-Life 2 on my PC. It's interesting how great that game is, despite its aged mechanics. It got me thinking about what would make for the ideal FPS game (perhaps a topic for another post).
  • Portal is fantastic, but you probably already knew that. Still, for a 3 hour gaming experience, it's just about perfect. I only got stuck a couple of times, and even then, it was fun piecing together what I needed to do... Well worth a play, even if you're not huge into gaming.
Movies
  • Machete is brilliant trash. Interestingly, Rodriguez takes the opportunity to address politics and make a point about immigration. This sort of hand-wringing would normally be annoying, but the mixture of polemic with gloriously over-the-top action, gratuitous nudity and violence, is actually pretty well balanced. On their own, those two elements would be cloying or frustrating. Mix them together, and you've got something altogether different, and it works really well. Also working well, Lindsay Lohan in a bit of self-aware stunt casting (I can't really say that the role "transcends" that with a straight face, but it does go further than simple exploitation). Not working so well: Jessica Alba. She's fine for most of the movie, but when it comes time for her to give an inspirational speech, it's kinda embarrassing. Danny Trejo, Michelle Rodriguez, Jeff Fahey, Cheech Marin, and Don Johnson (!?) are great. Robert De Niro and Steven Seagal are kinda sleepwalking through their roles, but they're fine. In the end, it's trashy fun, and I have a feeling it will stick with me more than other trashy summer fare.
  • The American, on the other hand, is slow, ponderous and ultimately pointless. A promising start, but rather than build on that, the tension evaporates as the film slowly grinds its way to an unsurprising conclusion. Poorly paced and not much to it...
  • Netflix Watch Instantly Pick of the Week: Beer Wars. A documentary about beer, featuring a pretty good cross-section of the craft brewing leaders in the US, as well as some interesting behind-the-scenes info about the legal side of things and how the laws impact the rest of the distribution chain. Really, it's just fun to see interviews with some of my favorite brewers, like the guys from Dogfish Head and Stone brewing, or the Yuengling owner (who seems to get drunk and spill some beans). If you like beer, it's well worth a watch.
  • Netflix Watch Instantly Pick of the Week That I Haven't Even Seen Yet: Mother. This Korean thriller made waves in the film-nerd community earlier this year, so it's on my must watch list. Seems noirish.
The Finer Things (aka Beer!)
  • Brewery Ommegang is probably my favorite brewery in America, and I recently managed to get my hands on some of their more uncommon brews. BPA is a Belgian-style pale ale. Not as hoppy as an IPA, but also not quite as tasty as Ommegang's other beers. An interesting experiment, but not something I see myself turning to very often. Bière De Mars, on the other hand, is great. I think Ommegang's standards are pretty tough to beat, but that one holds its own. It's a seasonal beer and a limited batch; the one I found was from 2008. It was well worth the wait. There are a bunch of other Ommegang seasonals or specialty beers, but the one I really want to find is the Tripel Perfection. The Tripel is probably my favorite style of Belgian beer, so I'd love to see Ommegang's take on it.
  • Some interesting stuff in my fridge: Saison Du BUFF is a collaboration between three local breweries. This batch is from Victory, but the formula was created by Victory, Stone, and Dogfish Head. I saw a case of the Dogfish Head somewhere, but didn't want to buy it until I tried it out. Also in the fridge: Fantôme Saison (this comes highly rated, but I haven't seen it around until now), and a few pumpkin or Octoberfest ales.
And that's all for now.
Posted by Mark on September 05, 2010 at 07:24 PM .: link :.


End of This Day's Posts

Wednesday, August 04, 2010

A/B Testing Spaghetti Sauce
Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I've explored on this blog, including Sunday's post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of spaghetti sauce at most supermarkets.
Again, this video touches on several topics explored on this blog in the past. For instance, it describes the origins of what's become known as the Paradox of Choice (or, as some would have you believe, the Paradise of Choice) - indeed, there's another TED talk linked right off the Gladwell video that covers that topic in detail.

The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I'll summarize in this paragraph in case you didn't watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
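
Moskowitz's insight maps pretty neatly onto what we'd now call clustering. Here's a minimal sketch in Python - all of the numbers are invented, and real market research data is obviously messier - showing the difference between optimizing for the population average and letting the segments emerge:

import random
random.seed(42)

# Hypothetical taste preferences on two axes (spiciness, chunkiness), 0-10.
# Three invented customer groups: plain, spicy, and chunky.
population = (
    [(random.gauss(2, 1), random.gauss(2, 1)) for _ in range(100)] +
    [(random.gauss(8, 1), random.gauss(2, 1)) for _ in range(100)] +
    [(random.gauss(3, 1), random.gauss(8, 1)) for _ in range(100)]
)

# The "platonic ideal" approach: one sauce at the population average.
average = (sum(x for x, _ in population) / len(population),
           sum(y for _, y in population) / len(population))
print("One perfect sauce lands at:", average)

# A few rounds of k-means with k=3 recovers the segments instead.
# Seeded with one guess from each region, just to keep the toy simple.
centers = [population[0], population[100], population[200]]
for _ in range(10):
    clusters = [[], [], []]
    for p in population:
        # each taster gravitates toward the nearest candidate sauce
        nearest = min(range(3), key=lambda i: (p[0] - centers[i][0]) ** 2 +
                                              (p[1] - centers[i][1]) ** 2)
        clusters[nearest].append(p)
    centers = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
               for c in clusters]
print("Three perfect sauces land at:", centers)

With data like this, the single "perfect" sauce lands in a dead zone between the three groups, while the clustering step recovers something close to plain, spicy, and chunky - Moskowitz's three sauces.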

Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we're seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download "free" music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of people's identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that's all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you're free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).

In relation to Sunday's post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it - create both versions of the image, segment visitors to your site, and track the results.
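
To make that concrete, here's a minimal sketch of the assignment-and-tracking half of the problem, assuming each visitor carries some stable ID (from a cookie, say). All the names and numbers here are hypothetical - the point is that hashing the visitor ID gives you an assignment that looks random across visitors but is stable for any individual visitor:

import hashlib
from collections import defaultdict

VARIANTS = ("closeup", "model")

def assign_variant(visitor_id, test_name="ring_image"):
    # Hash the (test, visitor) pair: assignment looks random across
    # visitors, but any given visitor always gets the same image back.
    digest = hashlib.md5((test_name + ":" + str(visitor_id)).encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

views = defaultdict(int)
purchases = defaultdict(int)

def record_view(visitor_id):
    views[assign_variant(visitor_id)] += 1

def record_purchase(visitor_id):
    purchases[assign_variant(visitor_id)] += 1

# Simulated traffic; in real life these events come from actual customers.
for visitor in range(10000):
    record_view(visitor)
    if visitor % 50 == 0:     # pretend one in fifty buys
        record_purchase(visitor)

for v in VARIANTS:
    rate = purchases[v] / float(views[v])
    print("%s: %d views, %.1f%% conversion" % (v, views[v], 100 * rate))

Stability matters: if a visitor saw a different image on every page load, the two segments would bleed into each other and the results would be meaningless.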

As discussed Sunday, there are a number of challenges with this approach, but one thing I didn't mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn't it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done - just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I'm not sure what the endgame looks like here. I suppose time will tell. For now, I'm just happy that Amazon's recommendations aren't completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
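
Sketching that shift is easy enough - instead of crowning one overall winner, you crown a winner per segment. The segments and numbers below are entirely invented, but they show the shape of it:

# segment -> variant -> (views, purchases); all numbers invented
results = {
    "gift buyers":   {"closeup": (5000, 150), "model": (5000, 210)},
    "self shoppers": {"closeup": (5000, 240), "model": (5000, 180)},
}

for segment, variants in results.items():
    # the winner for this segment is the variant with the best conversion rate
    best = max(variants, key=lambda v: variants[v][1] / float(variants[v][0]))
    print("%s: show the %s image" % (segment, best))

Toy numbers, but the pattern is the point: the test stops asking which image wins and starts asking which image wins for whom.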
Posted by Mark on August 04, 2010 at 07:54 PM .: link :.


End of This Day's Posts

Sunday, August 01, 2010

Groundhog Day and A/B Testing
Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.

In case you've only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed?) to live the same day over and over again. It doesn't seem to matter what he does during this day; he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.

Towards the beginning of the film, Phil does a lot of experimentation, and Atwood's observation is that this often takes the form of an A/B test. A/B testing is perhaps a more esoteric concept, but the principles are easy. Let's take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let's say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it's ultimately not up to us - in retail, it's all about the customer. You could "test" the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
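
"Enough people" is doing a lot of work in that last sentence, and it's a question you can actually answer with some basic statistics. Here's a minimal sketch - a standard two-proportion z-test with invented counts; this is the textbook approach, not anything specific to Atwood's post:

from math import sqrt, erf

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / float(n_a), conv_b / float(n_b)
    pooled = (conv_a + conv_b) / float(n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# invented numbers: 200/10000 conversions for A vs. 240/10000 for B
z, p = two_proportion_test(200, 10000, 240, 10000)
print("z = %.2f, p = %.3f" % (z, p))

With those made-up numbers - 2% vs. 2.4% conversion over 10,000 visitors each - the p-value comes out around 0.054: tantalizingly close to the usual 0.05 cutoff, but not quite there. Which is a nice preview of how these tests can be trickier than they look.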

This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn't like, so that he can construct the perfect relationship with her.
Phil doesn't just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn't. At the end he arrives at -- quite literally -- the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.

This is the purest form of A/B testing imaginable. Given two choices, pick the one that "wins", and keep repeating this ad infinitum until you arrive at the ultimate, most scientifically desirable choice.
As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects Phil. From this example and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.

It's an interesting point, but I'm not sure it's entirely applicable in all situations. Of course, Atwood admits that A/B testing is good at smoothing out details, but there's something more at work in Groundhog Day that Atwood is not mentioning. Namely, that Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he's done the experimentation to figure out what "works" and what doesn't, but his initial testing was ultimately shallow. Rita didn't reject him because he had all the right answers; she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.

If you look back at my example above about the ring being sold on a retail website, you'll note that there's no deception going on there. Somehow I doubt either image would result in a hollow feeling for the customer. Why is this different than Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc...) and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second guessing yourself, why not A/B test it and see what customers like better?

There are tons of limitations to this approach, but I don't think it's as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn't always conclusive, and even when it is, whatever learnings you get out of it aren't necessarily applicable in all situations. For instance, what works for our new ring can't necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products - as such, the simple example of the ring as described above would not be a good test for my company unless the ring were available for a very long time). Furthermore, while you can sometimes pick a winner, it's not always clear why it's a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results - on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that).
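
The product-turnover problem can actually be quantified before you run the test, by estimating how many visitors you'd need to detect a given lift. Here's a rough sketch using the standard rule-of-thumb formula (95% confidence, 80% power; the rates are invented):

from math import ceil, sqrt

def visitors_needed(base_rate, relative_lift):
    # standard two-sided test at 95% confidence with 80% power
    z_alpha, z_beta = 1.96, 0.84
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar)) +
          z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 /
         (p2 - p1) ** 2)
    return int(ceil(n))

# detecting a 10% relative lift on a 2% conversion rate:
print(visitors_needed(0.02, 0.10), "visitors per variant")

Something like 80,000 visitors per variant to reliably detect a 10% relative lift on a 2% conversion rate - if the product won't live long enough to see that kind of traffic, the test can't tell you anything conclusive.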

Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize the data. Phil's initial attempts to craft the perfect date for Rita fail because he's really only scraping the surface of her needs and desires. In other words, he's testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.

The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course, it took him decades to do so, and that's worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems and that the results you get depend on what you put into it. A/B testing is not the easy answer it's often portrayed as, and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn't even talk about multivariate tests! Let's get Christopher Nolan on that. He'd be great at that sort of movie, wouldn't he?)
Posted by Mark on August 01, 2010 at 09:57 PM .: link :.


End of This Day's Posts

Wednesday, July 14, 2010

Tasting Notes...
So Nick from CHUD recently revived the idea of a "Tasting Notes..." post that features a bunch of disconnected, scattershot notes on a variety of topics that don't really warrant a full post. It sounds like fun, so here are a few tasting notes...

Television
  • The latest season of True Blood seems to be collapsing under the weight of all the new characters and plotlines. It's still good, but the biggest issue with the series is that nothing seems to happen from week to week. That's the problem when you have a series with 15 different subplots, I guess. The motif for this season seems to be to end each episode with Vampire Bill doing something absurdly crazy. I still have hope for the series, but it was much better when I was watching it on DVD/On Demand, when all the episodes were available and I didn't have to wait a week between each one.
  • Netflix Watch Instantly Pick of the Week: The Dresden Files. An underappreciated Sci-Fi (er, SyFy) original series based on a series of novels by Jim Butcher, this focuses on that other magician named Harry. This one takes the form of a creature-of-the-week series mixed with a bit of a police procedural, and it's actually pretty good. We're not talking groundbreaking or anything, but it's great disposable entertainment and well worth a watch if you like magic and/or police procedurals. Unfortunately, it only lasted about 12 episodes, so there are still some loose threads and whatnot, but it's still a fun series.
Video Games
  • A little late to the party (but not as late as some others), I've started playing Grand Theft Auto IV recently. It's a fine game, I guess, but I've had this problem with the GTA series ever since I played GTA III: there doesn't seem to be anything new or interesting in each new iteration. GTA III was a fantastic game, and it seems like all of the myriad sequels since then have added approximately nothing to its legacy. Vice City and San Andreas tweaked various gameplay mechanics and whatnot, but they were ultimately the same game with some minor improvements. GTA IV seems basically like the same game again, but with HD graphics. Also, is it me, or is it harder to drive around town without constantly spinning out? Maybe Burnout Paradise ruined me on GTA driving, which I used to think of as a lot of fun.
  • I have to admit that this year's E3 seems like a bit of a bust for me. Microsoft had Kinect, which looks like it will be a silly failure (not that it really matters for me, as I have a PS3). Sony has finally caught up to where the Wii was a few years ago with Move, and I don't particularly care, as motion control games have consistently disappointed me. Sony also seems to have bet the farm on 3D gaming, but that would require me to purchase a new $5,000 TV and $100 glasses for anyone who wants to watch. Also, there's the fact that I couldn't care less about 3D. Speaking of which, Nintendo announced the 3DS, which is a portable gaming system with 3D that doesn't require glasses. This is neat, I guess, but I couldn't really care less about portable systems. There are a couple of interesting games for the Wii, namely the new Goldeneye and the new Zelda, but in both cases, I'm a little wary. My big problem with Nintendo this generation has been that they didn't do anything new or interesting after Wii Sports (and possibly Wii Fit). Everything else has been retreads of old games. There is a certain nostalgia value there, and I can enjoy some of those retreads (Mario Kart Wii was fun, but it's not really that different from a game that came out about 20 years ago, ditto for New Super Mario Brothers Wii, and about 10 other games), but at the same time, I'm getting sick of all that.
  • One game that was announced at E3 that I am looking forward to is called Journey. It's made by the same team as Flower and will hopefully be just as good.
  • Otherwise, I'll probably play a little more of GTA IV, just so I can get far enough to really cause some mayhem in Liberty City (this is another problem with a lot of sequels - you often start the sequel powered-down and have to build up various abilities that you're used to having) and pick up some games from last year, like Uncharted 2 and Batman: Arkham Asylum.
Movies
  • I saw Predators last weekend, and despite being a member of this year's illustrious Top 5 Movies I Want To See Even Though I Know They'll Suck list, I actually enjoyed it. Don't get me wrong, it's not fine cinema by any stretch of the imagination, but it knows where its bread is buttered and it hits all the appropriate beats. As MovieBob notes, this movie fills in the expected sequel trajectory of the Alien series. It's Aliens to Predator's Alien, if that makes any sense. In other words, it's Predator but with multiple predators and higher stakes. It's ultimately derivative in the extreme, but I really enjoyed the first movie, so that's not that bad. I mean, you've got the guy with the gatling gun, the tough ethnic girl who recognizes the predators, the tough ethnic guy who pulls off his shirt and faces the predator with a sword in hand to hand combat, and so on. Again, it's a fun movie, and probably the best since the original (although, that's not really saying much). Just don't hope for much in the way of anything new or exciting.
  • Netflix Watch Instantly Pick of the Week: The Girl with the Dragon Tattoo, for reasons expounded upon in Sunday's post.
  • Looking forward to Inception this weekend. Early reviews are positive, but I'm not really hoping for that much. Still, in a light year for movies, this looks decent.
The Finer Things
  • A couple weekends ago, I went out on my deck on a gorgeous night and drank a beer whilst smoking a cigar. I'm pretty good with beer, so I feel confident in telling you that if you get the chance, Affligem Dubbel is a great beer. It has a dark amber color and a great, full bodied taste. It's as smooth as can be, but carbonated enough that it doesn't taste flat. All in all, one of my favorite recent discoveries. I know absolutely nothing about cigars, but I had an Avo Uvezian Notturno XO (it came in an orange tube). It's a bit smaller than most other cigars I've had, but I actually enjoyed it quite a bit. Again, a cigar connoisseur, I am not, so take this with a grain of salt.
  • I just got back from my monthly beer club meeting. A decent selection tonight, with the standout and surprise winner being The Woodwork Series - Acasia Barreled. It's a tasty dubbel-style beer (perhaps not as good as the aforementioned Affligem, but still quite good) and well worth a try (I'm now interested in trying the other styles, which all seem to be based around the type of barrel the beer is stored in). Other standouts included a homebrewed Tripel (nice work Dana!), and, of course, someone brought Ommegang Abbey Ale (another Dubbel!), which is a longtime favorite of mine. The beer I brought was a Guldenberg (Belgian Tripel), but it must not have liked the car ride, as it pretty much exploded when we opened it. I think it tasted a bit flat after that, but it had a great flavor and I think I will certainly have to try this again (preferably not shaking it around so much before I open it).
And I think that just about wraps up this edition of Tasting Notes, which I rather enjoyed writing and will probably try again at some point.
Posted by Mark on July 14, 2010 at 07:38 PM .: link :.


End of This Day's Posts

Sunday, July 04, 2010

Incompetence
Noted documentary filmmaker Errol Morris has been writing a series of posts about incompetence for the NY Times. The most interesting parts feature an interview with David Dunning, a psychologist whose experiments have discovered what's called the Dunning-Kruger Effect: our incompetence masks our ability to recognize our incompetence.
DAVID DUNNING: There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.
I found this interesting in light of my recent posting about universally self-affirming outlooks (i.e. seeing the world the way we want to see it). In any case, the interview continues:
ERROL MORRIS: Knowing what you don’t know? Is this supposedly the hallmark of an intelligent person?

DAVID DUNNING: That’s absolutely right. It’s knowing that there are things you don’t know that you don’t know. [4] Donald Rumsfeld gave this speech about “unknown unknowns.” It goes something like this: “There are things we know we know about terrorism. There are things we know we don’t know. And there are things that are unknown unknowns. We don’t know that we don’t know.” He got a lot of grief for that. And I thought, “That’s the smartest and most modest thing I’ve heard in a year.”
It may be smart and modest, but that sort of thing usually gets politicians in trouble. But most people aren't politicians, and so it's worth looking into this concept a little further. An interesting result of this effect is that a lot of the smartest people also tend to be somewhat modest (this isn't to say that they don't have an ego or that they can't act in arrogant ways, just that they tend to have a better idea about how much they don't know). Steve Schwartz has an essay called No One Knows What the F*** They're Doing (or "The 3 Types of Knowledge") that explores these ideas in some detail:
To really understand how it is that no one knows what they’re doing, we need to understand the three fundamental categories of information.

There’s the shit you know, the shit you know you don’t know, and the shit you don’t know you don’t know.
Schwartz has a series of very helpful charts that illustrate this, but most people drastically overestimate the amount of knowledge in the "shit you know" category. In fact, that's the smallest category, and it is dwarfed by the "shit you know you don't know" category, which is, in itself, dwarfed by the "shit you don't know you don't know" category. The result is that most people who receive a lot of praise or recognition are surprised and feel a bit like a fraud.

This is hardly a new concept, but it's always worth keeping in mind. When we learn something new, we've gained some knowledge. We've put some information into the "shit we know" category. But more importantly, we've probably also taken something out of the "shit we don't know that we don't know" category and put it into the "shit we know that we don't know" category. This is important because that unknown unknowns category is the most dangerous of the three, not least because our ignorance prevents us from really exploring it. As mentioned at the beginning of this post, our incompetence masks our ability to recognize our incompetence. In the interview, Morris references a short film he did once:
ERROL MORRIS: And I have an interview with the president of the Alcor Life Extension Foundation, a cryonics organization, on the 6 o’clock news in Riverside, California. One of the executives of the company had frozen his mother’s head for future resuscitation. (It’s called a “neuro,” as opposed to a “full-body” freezing.) The prosecutor claimed that they may not have waited for her to die. In answer to a reporter’s question, the president of the Alcor Life Extension Foundation said, “You know, we’re not stupid . . . ” And then corrected himself almost immediately, “We’re not that stupid that we would do something like that.”

DAVID DUNNING: That’s pretty good.

ERROL MORRIS: “Yes. We’re stupid, but we’re not that stupid.”

DAVID DUNNING: And in some sense we apply that to the human race. There’s some comfort in that. We may be stupid, but we’re not that stupid.
One might be tempted to call this a cynical outlook, but what it basically amounts to is that there's always something new to learn. Indeed, the more we learn, the more there is to learn. Now, if only we could invent technology like what's presented in Diaspora (from my previous post), so we could live long enough to really learn a lot about the universe around us...
Posted by Mark on July 04, 2010 at 07:42 PM .: link :.


End of This Day's Posts

Wednesday, June 23, 2010

Internalizing the Ancient
Otaku Kun points to a wonderful entry in the Astronomy Picture of the Day series:
APOD: Milky Way Over Ancient Ghost Panel
The photo features two main elements: a nice view of the stars in the sky and a series of paintings on a canyon wall in Utah (it's the angle of the photograph and the clarity of the sky that makes it seem unreal to me, but looking at the larger version makes things a bit more clear). As OK points out, there are two corresponding kinds of antiquity here: "one cosmic, the other human". He speculates:
I think it’s impossible to really relate to things beyond human timescales. The idea of something being “ancient” has no meaning if it predates our human comprehension. The Neanderthals disappeared 30,000 years ago, which is probably really the farthest back we can reflect on. When we start talking about human forebears of 100,000 years ago and more, it becomes more abstract - that’s why it’s no coincidence that the Battlestar Galactica series finale set the events 150,000 years ago, well beyond even the reach of mythological narrative.
I'm reminded of an essay by C. Northcote Parkinson, called High Finance or The Point of Vanishing Interest (the essay appears in Parkinson's Law, a collection of essays). Parkinson writes about how finance committees work:
People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is of these that finance committees are mostly comprised.
He then goes on to explore what he calls the "Law of Triviality". Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved. Thus he concludes, after a number of humorous but fitting examples, that there is a point of vanishing interest where the committee can no longer comment with authority. Astonishingly, the amount of time that is spent on $10 million and on $10 may well be the same. There is clearly a space of time which suffices equally for the largest and smallest sums.

In short, it's difficult to internalize numbers that high, whether we're talking about large sums of money or cosmic timescales. Indeed, I'd even say that Parkinson was being a bit optimistic. Millionaires and mathematicians may have a better grasp on the situation than most, but even they are probably at a loss when we start talking about cosmic timeframes. OK also mentions Battlestar Galactica, which did end on an interesting note (even if that finale was quite disappointing as a whole) and which brings me to one of the reasons I really enjoy science fiction: the contemplation of concepts and ideas that are beyond comprehension. I can't really internalize the cosmic information encoded in the universe around me in such a way as to do anything useful with it, but I can contemplate it and struggle to understand it, which is interesting and valuable in its own right. Perhaps someday, we will be able to devise ways to internalize and process information on a cosmic scale (this sort of optimistic statement perhaps represents another reason I enjoy SF).
Posted by Mark on June 23, 2010 at 08:30 PM .: link :.


End of This Day's Posts

Sunday, May 30, 2010

Predictions
Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: "camera phones and iPods." This is what I wrote in response:
Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson "Walkman" branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can't be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I'll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

As for other trends, as you mention, I think we're goint to see a lot of hoopla about the next gen gaming consoles. The new Xbox comes out in time for Xmas this year and the new Playstation 3 hits early next year. The new playstation will probably have blue-ray DVD capability, which brings up another coming tech trend: the high capacity DVD war! It seems that Sony may actually be able to pull this one out (unlike Betamax), but I guess we'll have to wait and see...
For an off-the-cuff informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I'm pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn't anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn't really say anything about what we now call "apps".

In terms of game consoles, I didn't really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii's new controller scheme wasn't shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a "high capacity DVD war" and spelled blu-ray wrong.

I'm not generally good at making predictions about this sort of thing, but it's nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I'm not really used to... so here are a few predictions for the rest of this year:
  • Microsoft will release Natal this year, and it will be a massive failure. There will be a lot of neat talk about it and speculation about the future, but the fact is that gesture based interfaces and voice controls aren't especially great. I'll bet everyone says they'd like to use the Minority Report interface... but once they get to use it, I doubt people would actually find it more useful than current input methods. If it does attain success though, it will be because of the novelty of that sort of interaction. As a gaming platform, I think it will be a near total bust. The only way Microsoft would get Natal into homes is to bundle it with the XBox 360 (without raising the price).
  • Speaking of which, I think Sony's Playstation Move platform will be mildly more successful than Natal, which is to say that it will also be a failure. I don't see anything in their initial slate of games that makes me even want to try it out. All that being said, the PS3 will continue to gain ground against the Xbox 360, though not so much that it will overtake the other console.
  • While I'm at it, I might as well go out on a limb and say that the Wii will clobber both the PS3 and the Xbox 360. As of right now, their year in games seems relatively tame, so I don't see the Wii producing favorable year over year numbers (especially since I don't think they'll be able to replicate the success of New Super Mario Brothers Wii, which is selling obscenely well, even to this day). The one wildcard on the Wii right now is the Vitality Sensor. If Nintendo is able to put out the right software for that and if they're able to market it well, it could be a massive, audience-shifting blue ocean win for them. Coming up with a good "relaxation" game and marketing it to the proper audience is one hell of a challenge though. On the other hand, if anyone can pull that off, it's Nintendo.
  • Sony will also release some sort of 3D gaming and movie functionality for the home. It will also be a failure. In general, I think attitudes towards 3D are declining. I think it will take a high profile failure to really temper Hollywood's enthusiasm (and even then, the "3D bump" of sales seems to outweigh the risk in most cases). Nevertheless, I don't think 3D is here to stay. The next major 3D revolution will be when it becomes possible to do it without glasses (which, at that point, might be a completely different technology like holograms or something).
  • At first, I was going to predict that Hollywood would be seeing a dip in ticket sales, until I realized that Avatar was mostly a 2010 phenomenon, and that Alice in Wonderland has made about $1 billion worldwide already. Furthermore, this summer sees the release of The Twilight Saga: Eclipse, which could reach similar heights (for reference, New Moon did $700 million worldwide) and the next Harry Potter is coming in November (for reference, the last Potter film did around $930 million). Altogether, the film world seems to be doing well... in terms of sales. I have to say that from my perspective, things are not looking especially good when it comes to quality. I'm not even as interested in seeing a lot of the movies released so far this year (an informal look at my past few years indicates that I've normally seen about twice as many movies as I have this year - though part of that is due to the move of the Philly film fest to October).
  • I suppose I should also make some Apple predictions. The iPhone will continue to grow at a fast rate, though its growth will be tempered by Android phones. Right now, both of them are eviscerating the rest of the phone market. Once that is complete, we'll be left with a few relatively equal players, and I think that will lead to good options for us consumers. The iPhone has been taken to task more and more for Apple's control-freakism, but it's interesting that Android's open features are going to present more and more of a challenge to that as time goes on. Most recently, Google announced that the latest version of Android would feature the ability for your 3G/4G phone to act as a WiFi hotspot, which will most likely force Apple to do the same (apparently if you want to do this today, you have to jailbreak your iPhone). I don't think this spells the end of the iPhone anytime soon, but it does mean that they have some legitimate competition (and that competition is already challenging Apple with its feature-set, which is promising).
  • The iPad will continue to have modest success. Apple may be able to convert that to a huge success if they are able to bring down the price and iron out some of the software kinks (like multi-tasking, etc... something we already know is coming). The iPad has the potential to destroy the netbook market. Again, the biggest obstacle at this point is the price.
  • The Republicans will win more seats in the 2010 elections than the Democrats. I haven't looked close enough at the numbers to say whether or not they could take back either house of Congress (or both), but they will gain ground. This is not a statement of political preference either way for me, and my reasons for making this prediction are less about ideology than simple voter dissatisfaction. People aren't happy with the government, and that will manifest as votes against the incumbents. It's too far away from the 2012 elections to be sure, but I suspect Obama will hang on, if for no other reason than that he seems to be charismatic enough that people give him a pass on various mistakes or other bad news.
And I think that's good enough for now. In other news, I have started a couple of posts that are significantly more substantial than what I've been posting lately. Unfortunately, they're taking a while to produce, but at least there's some interesting stuff in the works.
Posted by Mark on May 30, 2010 at 09:00 PM .: link :.


End of This Day's Posts

Wednesday, March 10, 2010

Blast from the Past
A coworker recently unearthed a stash of a publication called The Net, a magazine published circa 1997. It's been an interesting trip down memory lane. In no particular order, here are some thoughts about this now defunct magazine.
  • Website: There was a website, using the oh-so-memorable URL of www.thenet-usa.com (I suppose they were trying to distinguish themselves from all the other countries with thenet websites). Naturally, the website is no longer available, but archive.org has a nice selection of valid content from the 96-97 era. It certainly wasn't the worst website in the world, but it's not exactly great either. Just to give you a taste - for a while, it apparently used frames. Judging by archive.org, the site went on until at least February of 2000, but the domain lapsed sometime around May of that year. Random clicking around the dates after 2000 yielded some interesting results. For a while, someone named Phil Viger used it as his personal webpage, complete with MIDI files (judging from his footer, he was someone who bought up a lot of URLs and put his simple page on there as a placeholder). By 2006, the site lapsed again, and it has remained vacant since then.
  • Imagez: One other fun thing about the website is that their image directory was called "imagez" (i.e. http://web.archive.org/web/19970701135348/www.thenet-usa.com/imagez/menubar/menu.gif). They thought they were so hip in the 90s. Of course, 10 years from now, some dufus will be writing a post very much like this and wondering why there's an "r" at the end of flickr.
  • Headlines: Some headlines from the magazine:
    • Top Secrets of the Webmaster Elite (And as if that weren't enough, we get the subhead: Warning: This information could create dangerously powerful Web Sites)
    • Are the Browser Wars Over? - Interestingly, the issue I'm looking at was from February 1997, meaning that IE and NN were still on their 3.x iterations. More on this story below
    • Unlock the Secrets of the Search Engines - Particularly notable in that this magazine was published before Google. Remember Excite (apparently, they're still around - who knew)?
    I could go on and on. Just pick up a magazine, open to a random page, and you'll find something very dated or a horrible pun (like Global Warning... get it? Instead of Global Warming, he's saying Global Warning! He's so clever!)
  • Browser Wars: With the impending release of IE4 and the Netscape Communicator suite, everyone thought that web browsers were going to go away, or be consumed by the OS. One of the regular features of the magazine was to ask a panel of experts a simple question, such as "Are Web Browsers an endangered species?" Some of the answers are ridiculous, like this one:
    The Web browser (content) and the desktop itself (functions) will all be integrated into our e-mail packages (communications).
    There is, perhaps, a nugget of truth there, but it certainly didn't happen that way. Still, the line between browser, desktop, and email client is shifting; this guy just picked the wrong central application. Speaking of which, this is another interesting answer:
    The desktop will give way to the webtop. You will hardly notice where the Web begins and your documents end.
    Is it me, or is this guy describing Chrome OS? This guy's answer and a lot of the others are obviously written with 90s terminology, but describing things that are happening today. For instance, the notion of desktop widgets (or gadgets or screenlets or whatever you call them) is mentioned multiple times, but not with our terminology.
  • Holy shit, remember VRML?
  • Pre-Google Silliness: "A search engine for searching search engines? Sure why not?" Later in the same issue, I saw an ad for a program that would automatically search multiple search engines and provide you with a consolidated list of results... for only $70!
  • Standards: This one's right on the money: "HTML will still be the standard everyone loves to hate." Of course, the author goes on to speculate that Java applets will rule the day, so it's not exactly prescient.
  • The Psychic: In one of my favorite discoveries, the magazine pitted The Suit Versus the Psychic. Of course, the suit gives relatively boring answers to the questions, but the Psychic, he's awesome. Regarding NN vs IE, he says "I foresee Netscape over Microsoft's IE for 1997. Netscape is cleaner on an energy level. It appears to me to be more flexible and intuitive. IE has lower energy. I see encumbrances all around it." Nice! Regarding IPOs, our clairvoyant friend had this to say: "I predict IPOs continuing to struggle throughout 1997. I don't know anything about them on this level, but that just came to me." Hey, at least he's honest. Right?
Honestly, I'm not sure I'm even doing this justice. I need to read through more of these magazines. Perhaps another post is forthcoming...
Posted by Mark on March 10, 2010 at 07:19 PM .: link :.


End of This Day's Posts

Wednesday, December 30, 2009

More on Visual Literacy
In response to my post on Visual Literacy and Rembrandt's J'accuse, long-time Kaedrin friend Roy made some interesting comments about director Peter Greenaway's insistence that our ability to analyze visual art forms like paintings is ill-informed and impoverished.
It depends on what you mean by visually illiterate, I guess. Because I think that the majority of people are as visually literate as they are textually literate. What you seem to be comparing is the ability to read into a painting with the ability to read words, but that's not just reading, you're talking about analyzing and deconstructing at that point. I mean, most people can watch a movie or look at a picture and do some basic contextualizing. ... It's not for lack of literacy, it's for lack of training. You know how it is... there's reading, and then there's Reading. Most people in the United States know how to read, but that doesn't mean that they know how to Read. Likewise with visual materials--most people know how to view a painting, they just don't know how to View a Painting. I don't think we're visually illiterate morons, I just think we're only superficially trained.
I mostly agree with Roy, and I spent most of my post critiquing Greenaway's film for similar reasons. However, I find the subject of visual literacy interesting. First, as Roy mentions, it depends on how you define the phrase. When we hear the term literacy, we usually mean the ability to read and write, but there's also a more general definition of being educated or having knowledge within a particular subject or field (i.e. computer literacy or in our case, visual literacy). Greenaway is clearly emphasizing the more general definition. It's not that he thinks we can't see a painting, it's that we don't know enough about the context of the paintings we are viewing.

Roy is correct to point out that most people actually do have relatively sophisticated visual skills:
Even when people don't have the vocabulary or training, they still pick up on things, because I think we use symbols and visual language all the time. We read expressions and body language really well, for example. Almost all of our driving rules are encoded first and foremost as symbols, not words--red=stop, green=go, yellow=caution. You don't need "Stop" or "Yield" on the sign to know which it is--the shape of the sign tells you.
Those are great examples of visual encoding and conventions, but do they represent literacy? Why does a stop sign represent what it does? There are three main components to the stop sign:
  1. Text - It literally says "Stop" on the sign. However, this is not universal. In Israel, for instance, there is no text. In its place is an image of a hand in a "stop" gesture.
  2. Shape - The octagonal shape of the sign is unique, and so the sign is identifiable even if obscured. The shape also allows drivers facing the back of the sign to identify that oncoming drivers have a stop sign...
  3. Color - The sign is red, a "hot" color that stands out more than most colors. Blood and fire are red, and red is associated with sin, guilt, passion, and anger, among many other things. As such, red is often used to represent warnings, hence its frequent use in traffic signals such as the stop sign.
Interestingly, these different components are overlapping and reinforcing. If one fails (for someone who is color-blind or someone who can't read, for example), another can still communicate the meaning of the sign. There's something similar going on with traffic lights, as the position of the light is just as important as (if not more important than) the color of the light.

However, it's worth noting that the clear meaning of a stop sign is also due to the fact that it's a near universal convention used throughout the entire world. Not all traffic signals are as well defined. Case in point, what does a blinking green traffic light represent? Blinking red means to "stop, then proceed with caution" (kinda like a stop sign). Blinking yellow means to "slow down and proceed with caution." So what does a blinking green mean? James Grimmelmann tried to figure it out:
It turns out (courtesy of the ODP and rec.travel), perhaps unsurprisingly, that there is no uniform agreement on the meaning of a blinking green light. In a bunch of Canadian provinces, it has the same general meaning that a regular green light does, with the added modifier that you are the undisputed master of all you survey. All other traffic entering the intersection has a stop sign or a red light, and must bow down before your awesome cosmic powers. On the other hand, if you're in Massachusetts or British Columbia and you try a no-look Ontario-style left turn on a blinking green, you're liable to get into a smackup, since the blinking green means only that cross traffic is seeing red, with no guarantees about oncoming traffic.
Now, maybe it's just because we're starting to get obscure and complicated here, but the reason traffic signals work is that we've established a set of conventions that are similar most everywhere. But when we mess around with them or get too complicated, it could be a problem. Luckily, we don't do that sort of thing very often (even the blinking green example is probably vanishingly obscure - I've never seen or even heard of that happening until reading James' post). These conventions are learned, usually through simple observation, though we also regulate who can drive and require people to study the rules of driving (including signs and lights) before granting a license.

Another example, perhaps surprising because it is something primarily thought of as a textual medium, is newspapers. Take a look at this front page of a newspaper1:

The Onion Newspaper

Newspapers use numerous techniques (such as prominence, grouping, and nesting) to establish a visual hierarchy, allowing readers to scan the page to find what stories they want to read. In the image above, the size of the headline (Victory!) as well as its placement on the page makes it clear at a glance that this is the most important story. The headline "Miami Police Department Unveils New Pastel Pink and Aqua Uniforms" spans three columns of text, making it obvious that they're all part of the same story. Furthermore, we know the picture of Crockett and Tubbs goes with the same story because both the picture and the text are spanned by the same headline. And so on.

Now I know what my younger readers2 are thinking: What the fuck is this "newspaper" thing you're babbling about? Well, it turns out that a lot of the same conventions apply to the web. There are, of course, new conventions on the web (for instance, links are usually represented by different colored text that is also underlined), but many of the same techniques are used to establish a visual hierarchy on the web.

What's more interesting about newspapers and the web is that we aren't really trained how to read them, but we figure it out anyway. In his excellent book on usability, Don't Make Me Think, Steve Krug writes:
At some point in our youth, without ever being taught, we all learned to read a newspaper. Not the words, but the conventions.

We learned, for instance, that a phrase in very large type is usually a headline that summarizes the story underneath it, and that the text underneath a picture is either a caption that tells me what it's a picture of, or - if it's in very small type - a photo credit that tells me who took the picture.

We learned that knowing the various conventions of page layout and formatting made it easier and faster to scan a newspaper and find the stories we were interested in. And when we started traveling to other cities, we learned that all newspapers used the same conventions (with slight variations), so knowing the conventions made it easy to read any newspaper.
The tricky part about this is that the learning seems to happen subconsciously. Large type is pretty obvious, but column spanning? Captions? Nesting? Some of this stuff gets pretty subtle, and for the most part, people don't care. They just scan the page, find what they want, and read the story. It's just intuitive.

But designing a layout is not quite as intuitive. Many of the lessons we have internalized in reading a newspaper (or a website) aren't really available to us in a situation where we're asked to design a layout. If you want a good example of this, look at web pages designed in the mid-90s. By now, we've got blogs and mini-CMS style systems that automate layouts and take design out of most people's hands.

So, does Greenaway have a valid point? Or is Roy right? Obviously, we all process visual information, and visual symbolism is frequently used to encode large amounts of information into a relatively small space. Does that make us visually literate? I guess it all comes down to your definition of literate. Roy seems to take the more specific definition of "able to read or write" while Greenaway seems to be more concerned with "education or knowledge in a specified field." The question then becomes, are we more textually literate than we are visually literate? Greenaway certainly seems to think so. Roy seems to think we're just about equal on both fronts. I think both positions are defensible, especially when you consider that Greenaway is talking specifically about art. Furthermore, his movie is about a classical painting that was created several centuries ago. For most young people today, art is more diffuse. When you think about it, almost anything can be art. I suspect Greenaway would be disgusted by that sort of attitude, which is perhaps another way to view his thoughts on visual literacy.

1 - Yeah, it's the Onion and not a real newspaper per se, but it's fun and it's representative of common newspaper conventions.

2 - Hahaha, as if I have more than 5 readers, let alone any young readers.
Posted by Mark on December 30, 2009 at 07:13 PM .: link :.


End of This Day's Posts

Sunday, June 28, 2009

Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how they work, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor is performing these operations in a serial fashion - basically a single-file line of operations.

This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once, rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. When a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite wasteful. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained with frequent context switches.

If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with the use of a signal called an Interrupt. For the most part, the computers we're all using are Interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
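
You can watch a user-space cousin of this in action. The sketch below uses Python's signal module (a real standard-library interface to Unix signals, so it's Unix-only; the handler name is mine): the OS interrupts the sleeping program, runs the handler, and then lets the original task resume.

    import signal
    import time

    def on_interrupt(signum, frame):
        # The OS suspends whatever the process was doing and jumps here.
        print("interrupted by signal", signum)

    signal.signal(signal.SIGALRM, on_interrupt)  # register the handler
    signal.alarm(1)   # ask the OS to interrupt this process in 1 second
    time.sleep(2)     # "busy" with something else
    print("back to the original task")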

This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms or a number of other physical constraints, and so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about Multi-core processing (most commonly used with 2 or 4 cores).
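
As a tiny taste of the parallel side, here's a sketch using Python's multiprocessing module (a real standard-library module; the four-worker count is just an assumption to match a quad-core machine) to spread work across cores instead of doing it one item at a time:

    import multiprocessing

    def square(n):
        return n * n

    if __name__ == "__main__":
        # Four worker processes, each potentially running on its own core.
        with multiprocessing.Pool(processes=4) as pool:
            print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]

The hard part, of course, isn't spinning up the workers - it's decomposing real problems so they can be attacked in parallel at all.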

Parallel computing can accomplish things that are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason for that is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's, but the elephant is obviously much less intelligent.

Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size or neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
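
The standard textbook abstraction of those weighted connections looks something like this (a sketch only - real neurons are vastly messier, and the numbers are made up):

    def neuron_fires(inputs, weights, threshold):
        # Each incoming connection contributes its signal scaled by its
        # weight; the neuron fires only if the total crosses the threshold.
        activation = sum(s * w for s, w in zip(inputs, weights))
        return activation >= threshold

    # Three incoming connections; the second carries the most influence.
    print(neuron_fires([1, 1, 0], [0.2, 0.9, 0.5], threshold=1.0))  # True
    print(neuron_fires([1, 0, 1], [0.2, 0.9, 0.5], threshold=1.0))  # False

In this picture, "learning" is largely a matter of adjusting the weights - the software analogue of connections whose influence is constantly changing to meet new needs.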

This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.

However, this all comes with its own set of tradeoffs. With respect to this post, the most relevant tradeoff is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).

In a computer, everything is happening in serial and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system, they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in what you were originally doing because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you need to ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.

One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can't always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There's something incredible heavy in my brain that is extremely hard to get up to speed, but once it's rolling at full speed, it takes no effort to keep it going.
I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.

From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is there (and complain when people do show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.

A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to find at a lot of workplaces, including mine. We're a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a lot of meetings on our calendars, which only makes it more difficult to concentrate on something important.

Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.

Another example: if it's 2:40 pm and I know I have a meeting at 3 pm - should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.

Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).

(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.


End of This Day's Posts

Wednesday, February 04, 2009

Nerdy
I've always considered myself something of a nerd, even back when being nerdy wasn't cool. Nowadays, everyone thinks they're a nerd. MGK recently noticed this:
Recently, I was surfing the net looking for lols, and came across a personal ad on Craigslist. The ad was not in and of itself hilarious, but one thing struck me. The writer described herself as “nerdy,” and as an example of her nerdiness, explained that she loved to watch Desperate Housewives.

My god, people, have we allowed “nerdy” to be defined down so greatly that watching Desperate Housewives - a top 20 Neilsen primetime soap opera with no actual nerd content per se - qualifies as “nerdy” now? That is just wrong. The nerdular act cannot be allowed to be so mainstream.
To address this situation, he has devised "a handy guide for people to define their own nerdiness, based on a number of nerdistic passions." I'm a little surprised at how poorly I did in some of these categories.
  • Batman - Not Nerdy. When I think about it, it's not that surprising. After all, I have never read any of the comic books, not even Year One or The Dark Knight Returns, which MGK specifically calls out later in his criteria as not being particularly nerdy. That said, I wonder how watching The Dark Knight 5 times (three times in the theater) in less than a year qualifies.
  • Star Wars - Slightly Nerdy. Now this one is surprising. Sure, according to this guide, I'm nerdier about Star Wars than I am about Batman, but only a little. I suppose if he had loosened the criteria or chosen a different random fact for the "nerdy" level, I could easily have reached that level, for I have had some experience with the “expanded universe” Star Wars novels. One other gripe is that no self-respecting nerd would defend the idea of Jar Jar Binks!
  • Harry Potter - Somewhere between Not Nerdy and Slightly Nerdy. I didn't particularly love Harry Potter and the Order of the Phoenix, and my dislike may disqualify me from the Slightly Nerdy level. On the other hand, I didn't particularly hate the novel either, and I had no problem blowing through it rather quickly.
  • Magic: The Gathering - Slightly Nerdy. I have to say that I didn't play this game that much, but I really did enjoy it when I did. But it got way too complicated later on, and some people took it wayyy too seriously.
  • H.P. Lovecraft - Dangerously Nerdy. Finally! Though I have to admit that I don't qualify for three of the lesser levels... However, I have read several of his stories, which is apparently dangerously nerdy.
  • Nerd Television - Dangerously Nerdy. Totally. The two shows I haven't watched much of are the lowest ranked ones. I've seen a significant portion of the other ones, including The Adventures of Brisco County Jr. (at this point, even recognizing what Brisco County Jr. is, is probably nerdworthy).
  • Star Trek - I think I might be Fairly Nerdy here, otherwise I'm Not Nerdy. It's just that I don't actually remember which one Picard rode the dune buggy in. That probably disqualifies me. I do love TNG though. Could never get into any of the other spinoffs.
  • Computer Use - Nerdy. Potentially Really Nerdy, but there are definitely a couple of coding jokes in XKCD that I haven't gotten (but I get a pretty good portion of them).
Again, I am a bit surprised at how non-nerdy I am. I mean, aside from a couple of dangerously nerdy subjects, I'm not very nerdy at all. How did you do?
Posted by Mark on February 04, 2009 at 10:45 PM .: link :.


End of This Day's Posts

Sunday, January 04, 2009

The PS3, Revisiting Predictions & Other Odds & Ends
The PS3 came yesterday, so I've spent most of the time since then in a Blu-Ray and Video Game induced haze. I was lured out by my brother this afternoon to watch the Eagles playoff game (we won!) and maybe feed myself too. While I'm out, I figure I should at least make some pretense at updating the blog with something...
  • Might as well get this out of the way first: The PS3 is actually pretty great. At this point, I've spent most of my time playing Assassin's Creed, which is great so far (though my understanding is that it gets repetitive and that's certainly something I'm starting to see...). I also watched the Final Cut of Blade Runner. The set I got comes with 3 other versions of the movie and like 15 hours of extras (these are in standard definition though), including an almost 4 hour in-depth documentary on the production. I also got Resistance, Call of Duty 4, and The Dark Knight, but have yet to fiddle around with those. The PS3 online system seems decent, though I haven't really done anything with it just yet. All in all, I'm very satisfied with my purchase so far.
  • Last January, I made 5 predictions for 2008, and it turns out that I was mostly correct! Neal Stephenson did announce a new novel (which I thoroughly enjoyed), but I was wrong about the setting (though I admitted that possibility in my prediction). The WGA strike did end, and for the most part, TV didn't recover much of what they lost. Few new shows did well, and there were big ratings drops for existing hits like Heroes. Box Office numbers were a bit skewed by The Dark Knight and Iron Man, but admissions were down (on the other hand, they were only down 4%, which isn't bad when compared to the rest of the economy). I predicted Blu-Ray would pick up ground, but not that Blu-Ray would win so decisively and so early. My DRM prediction seems rather stale - not much has changed in either the music industry or the movie industry. And Barack Obama did win the election. So overall, I'd say 4 out of 5 wasn't bad... but that's probably more because I didn't really go out on a limb with any of my predictions! Not sure if I'll be making any predictions for 2009, but you never know...
  • As I have for the past two years, I'm going to do another Kaedrin Movie Awards series of posts for 2008. As I've mentioned before, 2008 hasn't been a particularly great year (perhaps still feeling the effects of the writer's strike?), so I'm still trying to catch up with some films in order to compile my lists. If you have any nominations for the standard awards (see last year for an example) or any arbitrary awards you'd like to see, feel free to leave some comments or send me an email...
That's all for now. I believe I have some evil people to assassinate. Or perhaps I should repel an alien invasion. Or maybe I should just watch The Dark Knight again. Decisions, decisions...
Posted by Mark on January 04, 2009 at 08:33 PM .: link :.


End of This Day's Posts

Wednesday, December 17, 2008

12DC: Day 4 - Eggnog
A family tradition has grown over the past few years. Every Thanksgiving, we have an Eggnog tasting. Nothing fancy or scientific (though perhaps that can be arranged next year!) and we're pretty bad about organizing this. In point of fact, this year, we only had 4 varieties to try out. Last year, however, was a different story. Again, due to poor planning, several people brought several different varieties, which led us to have 14 different brands of eggnog.

Eggnog!

For reference, these are the eggnogs pictured:
  • Turkey Hill
  • Southern Comfort (Traditional)
  • Southern Comfort (Vanilla Spice)
  • Organic Valley
  • Shop Rite
  • Hood (Sugar Cookie)
  • Hood (Pumpkin)
  • Hood (Gingerbread)
  • Hood (Cinnamon)
  • Axelrod
  • Wawa
  • Tuscan Dairy Farms
  • Soy Nog
  • Borden
Like I said, nothing particularly scientific or comprehensive about the process (heck, we don't even add alcohol), but a general consensus arose. First, "flavored" eggnogs (like Vanilla Spice or Pumpkin, etc...), while tasty and a nice change of pace, were generally considered to be out of the running for the prize. Second, Soy Nog was unanimously declared the worst egg nog evar. This may have had something to do with the fact that it didn't actually have egg in it, and thus isn't really eggnog, but still. And finally, the winner (not unanimous, but it scored a decisive victory), was actually Wawa brand eggnog. For those of you non-East-Coasters, Wawa is a popular convenience store (a la 7-Eleven, but better) and dairy farm, and their Eggnog is great.

Up until this event started, I'd never been much of a fan of eggnog. There's just something unappealing about a substance that is so scary-bad-for-you that you can only consume it for a limited period of the year. But I've grown into it and am looking forward to next year's tasting...
Posted by Mark on December 17, 2008 at 07:05 PM .: link :.


End of This Day's Posts

Tuesday, December 16, 2008

12DC: Day 3 - The Christmas Cactus
What do you use, a tree? Pfft!

Traditional Kaedrin Christmas Cactus

The traditional Kaedrin Christmas cactus strikes again. Also striking again, my poor photography skillz! More to come...
Posted by Mark on December 16, 2008 at 06:45 PM .: link :.


End of This Day's Posts

Wednesday, November 26, 2008

Geekout: Alien vs. Predator
A while ago, I ran across this McSweeney's article that pits Alien vs. Predator in a series of unlikely events like Macramé and Lincoln-Douglas Debating. Long time readers will know that I am a fan of the Alien vs. Predator concept, though the recent films have been awful (Alien, Aliens, and Predator are some of my favorite movies though, and the original AvP comic book was fantastic). In any case, I couldn't resist discussing and debating some of the events listed out, and the result was a pretty amusing (and incredibly geeky) conversation.

The first event under question was Breakdancing. I had picked the Alien for this and thought it was the obvious choice. My friend Roy disagreed, noting:
I think you've failed to take into account the unique physiology of the alien. Those tubes on his back? The tail? Those are going to make dancing very difficult. No backspins for him. I think that the Predator's upper body strength will help him to pull of some awesome moves. And, he doesn't have big pipes or tubes coming up out of his back.
I have to admit that he had a point about the tubes on the Alien's back, but I still felt the Alien was the superior breakdancer. My response:
Point taken, but I still see the Alien having much more agility, thus giving them the ability to move more gracefully than the Predator while break dancing. While their backspins might be problematic, they do have that giant head which would enable them to perform some rather spectacular headstands and headspins. And while the tail could get in the way of a back-spin, it would also give them a valuable 5th pivot with which they could pull off all sorts of crazy moves. Back spins are an important part of break dancing, but there are no shortages of upper body, frontal, side, or sliding moves, and indeed, there seem to be more of those than back maneuvers. When you add in the Alien's unique physiology, you get something that would allow for all sorts of variations and indeed, even totally new moves. Really, I think the Alien would revolutionize the break dancing scene. The Predator's upper-body strength would allow for some amazing handstand style moves, but in almost every other way they are less limber and agile than the alien or even most human break-dance experts. Indeed, the alien does not seem to have an absence of upper body strength, so it's not like that gives the Predator a decisive advantage (the way the alien's tail does). I suppose it's possible that not all Predators are as bulked up as the ones in the films, but there is no real evidence of that.
Personally, I still believe I'm right on that one. The next event that came into question was Competitive Hot-Dog Eating. My initial pick was Predator, mostly because of his larger mouth and mandibles (when you look closely, the Alien's mouth is actually quite small). Anyway, Roy had some comments about this pick as well:
Totally goes to alien. Aliens are always hungry. They do nothing but eat and kill. We don't even actually know that Predator's eat meat. They're probably a bunch of annoying vegans. ;P
Once again, I think Roy makes a fair point here, but it's ultimately unpersuasive. My response:
This makes more sense to me, though I do maintain that the Alien's multi-tiered mouth is still significantly smaller and thus represents a bottleneck during any sort of competitive eating contest. Yes, their activities are generally limited to eating, killing, building those crazy hives and reproducing, but I see that as just a further example of why they would not be good at competitive eating. Since that's all they do, they do not have to eat fast. It's hard to tell because the alien and its motivations are so... alien... and unexplored. The Predators, on the other hand, clearly have some sort of civilization with technological capabilities well beyond our own. It stands to reason that they would have less time dedicated to eating, and thus would need to scarf down more in less time... which means they would be better suited towards competitive eating. Your point about vegan Predators is also taken, but what we know of their culture is that it is based primarily on hunting. While I'm sure there are vegan Predators, I think it's fair to speculate that a race of hunters values and prizes meat.
I thought that was pretty good, but someone else stepped in at this point to defend Roy, noting that:
We know they hunt, yes, but in the hunts we've seen they take trophies, not food. I have yet to see a predator field-dress an alien. I mean, hell, how much meat could be on something like that anyway? It's all chitin and sinew, not really a meal at all, and that's before we think about the effects upon the stomach lining of that acid blood (ulcers like you wouldn't believe!!). No, it's not fair to speculate on their eating habits by looking at their hunts. Their hunts are trophy kills, rites of passage, not a means for survival. Everything we've seen of their society, we haven't been given clue one about their eating habits.
This is certainly an interesting take on the matter. My response:
Interesting point, but I think it's reasonable to make some extrapolations based on their hunting culture. It's reasonable to assume that their hunts as portrayed in the movies are indeed trophy hunts and not a matter of survival or food. This makes sense on an additional level because they're hunting alien species and alien physiology may not react well with their digestive systems (as you mention, the alien would be particularly bad in that respect). However, it's also reasonable to assume that the reason for their hunting tradition is that they were required to do so in the evolution of their species. Yes, I'm extrapolating from human experiences here, but there are humans today who hunt purely for trophies. It's reasonable to assume that the reason the Predator race is so focused on hunting is that they were forced to do so on their home planet. Indeed, in such a case, the act of hunting could take on a more meaningful aspect because of symbolic or perhaps even spiritual reasons. The act of hunting clearly goes beyond survival for them, but it's reasonable to assume that it began as a simple survival technique on their home planet, and grew into a more meaningful practice as the race became more advanced.
This thread went on for a few more posts and ultimately resulted in a stalemate, as we really don't know enough about either culture to say for sure. I still think it's reasonable to say that the hunting culture of the Predators implies a history of hunting and meat-eating.

The next topic under debate was the Wet T-Shirt Contest, which I had originally given a tie. After all, for the most part, we see both the Alien and the Predator without their shirts on, so what's the point of a Wet T-Shirt Contest? However, someone interjected a brilliant point that totally convinced me that I was wrong; the Alien would undoubtedly win this event.
Wet T-shirt: Alien. Preddy has been noted on several occasions to be "one ugly motherfucker."
There is simply no arguing with that one.
Posted by Mark on November 26, 2008 at 11:32 PM .: link :.


End of This Day's Posts

Wednesday, September 24, 2008

The Moon
A few years ago, The Onion put out a book called Our Dumb Century. It was composed of a series of newspaper front pages, one from each year. It was an interesting book, in part because of the events they chose to represent each year and also because The Onion writers are hilarious. The most brilliant entry in the book was from the 1969 edition of the paper:

Newspaper from 1969: Holy Shit, Man Walks on Fucking Moon

Utterly brilliant. You can't read it on that small copy, but there's a whole profanity-laden exchange between Houston and Tranquility Base that's also hysterically funny. As it turns out, The Onion folks went ahead and made a video, complete with archival footage and authentic sounding voices, beeps, static, etc... Incredibly funny. [video via Need Coffee]

Update: Weird, I tried to embed the video in this post, but when you click play it says it's no longer available... but if you go directly to youtube, you can get the video. I'm taking out the embedded video and putting in the link for now.
Posted by Mark on September 24, 2008 at 10:04 PM .: link :.


End of This Day's Posts

Wednesday, August 06, 2008

Keeper Leagues and Unexpected Consequences
It's not a secret that I'm a pretty geeky guy, especially when it comes to certain subjects (movies, SF, etc...). My friends are a different kind of geek though. They're sports geeks. Specifically, they love baseball. About 10 years ago, they started a fantasy baseball league. At the time, the various websites weren't that great, but as the years passed, things started to get more sophisticated... and the league became much more competitive. In true geek fashion, we started getting carried away with various aspects of the league. Every team owner is expected to issue faux-press releases (pretending to be the Associated Press, conducting faux interviews, etc...), and the league wrote a Constitution. In its current incarnation, the Constitution is 11 pages long. Every year, owners propose amendments in accordance with Article VI of the Constitution, and if 2/3 of the league approves of the amendment, it is ratified and put in the Constitution.

A few years ago, we ratified an amendment that gave each owner "keeper rights." What this basically means is that you can keep three eligible members of your team for the next season. Here's an excerpt from Article IV of the MLF Constitution:
Article IV: Keeper Rights

4. A Keeper Right is defined as the opportunity for a MLF manager to retain the rights of a player for one season
4.1. A player is eligible to be kept if they meet the following criteria
4.1.1. The player must be on your current MLF roster
4.1.2. The player must have been drafted no earlier than the fourth round of that year’s draft
4.1.3. The player has not been kept in the year prior
4.1.4. The player must have been on a MLF roster by the end of the last game of the MLF playoffs (the end of the MLB regular season)
The rules of keeper eligibility help keep things a little even, meaning that a team that wins the league one year won't necessarily carry a big advantage over everyone else into the next. You can't keep a player indefinitely, and since players drafted in the first three rounds are also ineligible, that ensures that the best players are still open to even the worst team in the following year's draft. And Article IV, section 3 features an interesting twist: "Trading keeper rights is permitted."
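
The eligibility rules are mechanical enough that you could express them as a simple check. A sketch (the field names are hypothetical stand-ins for Article IV, section 4.1 - our league doesn't actually automate any of this):

    def is_keeper_eligible(player):
        return (player["on_current_roster"]             # 4.1.1
                and player["draft_round"] >= 4          # 4.1.2
                and not player["kept_last_year"]        # 4.1.3
                and player["rostered_at_season_end"])   # 4.1.4

    pujols = {"on_current_roster": True, "draft_round": 1,
              "kept_last_year": False, "rostered_at_season_end": True}
    print(is_keeper_eligible(pujols))  # False: early-round picks can't be kept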

Now, these rules were put into place for many reasons. Some people like the opportunity to take a chance on a young, developing player (in the hopes that they'll be able to keep them for a breakout year in the following season). Some people want to make sure the team has a solid core that can be built upon. And a host of other reasons. However, after three years of keeper rights, some unexpected consequences have presented themselves.

The biggest implication is that team owners who are not doing well will "sell" their keeper ineligible players for more keeper rights and keeper eligible players. Similarly, those who are doing well will "sell" their keeper rights in the hopes of strengthening their team for the playoffs. The reason I'm using scare quotes around the word "sell" is that what this really amounts to are fire sales. Top tier players will often be traded for near scraps because a team that has no hope of winning the league has no use for that top tier player, but they could use a keeper right to help build for the future.

Initially, there was a bit of a learning curve. How much value does a keeper right really have? In the first season, someone traded 3 keeper rights for Albert Pujols, a trade so lopsided that a new constitutional amendment was ratified (titled The Golden Shaft award, it is given to the owner who made the worst trade of the season). However, after a few years, things have changed. Keeper rights have become more valuable, and teams in contention will "mortgage their future" by trading keeper rights for players (this effectively means they can add top tier talent without losing anything that impacts them for the current season). Some people value keeper rights much more than others, and during this season's trade deadline, things got ridiculous.

During the last day before the trade deadline, there were 8 trades involving 36 players and 7 keeps. This is rather obscene. One owner traded his 3 keeps for 8 players (many top tier folks) and made another trade for 5 additional players. In effect, this person replaced most of his team in one day and became an instant league powerhouse (and he is my division rival as well!) Needless to say, this year's "Winter Meetings" will contain much discussion regarding how we can mitigate these fire sales. There are several options available to us:
  • Push the trade deadline up a month. Teams that know they are out of contention on July 31 (the current trade deadline, same as MLB) might not know as much in June.
  • Make two trade deadlines: an earlier one for trades involving keeper rights, and a later one for ordinary player-for-player trades. The strategy here is similar to pushing the trade deadline up.
  • No more keeper rights can be traded. Only players. This option would mean that teams looking to upgrade must give up players to get other players in return.
  • Extend players' keeper eligibility to 2 years. If this were the rule, a lot of the players moved at this year's deadline would not have been traded, since they could have been kept for another year.
  • Expand the keeper system. Add a farm system and increase the number of keeper rights per team - but again, keeper rights can't be traded.
  • No more keeper rights period.
And I'm sure there are lots of other variants that aren't listed. There will be a heated debate over the winter about all available options, and I'm positive that the Amendments process will be quite interesting this year. On a personal level, I'm not sure where I'll fall. While some of this year's trades were absurd (8 players for 3 keeps is crazy), it wasn't totally unexpected. While it's never been this crazy, there are always a ton of trades right at the deadline. I don't see any way around this sort of volatility in a keeper league. Plus, I kinda like that our trade deadline is 10 times as exciting as Major League Baseball's trade deadline.
Posted by Mark on August 06, 2008 at 09:09 PM .: link :.


End of This Day's Posts

Wednesday, July 30, 2008

Predictions and Information Overload
I'm currently reading Arthur C. Clarke's novel, Childhood's End, and I found this passage funny:
...there are too many distractions and entertainments. Do you realize that every day something like five hundred hours of radio and TV pour out over the various channels? If you went without sleep and did nothing else, you could follow less than a twentieth of the entertainment that’s available at the turn of a switch! No wonder people are becoming passive sponges — absorbing but never creating. Did you know that the average viewing time per person is now three hours a day? Soon people won’t be living their own lives any more. It will be a full-time job keeping up with the various family serials on TV!
I don't think Clarke was really attempting to make a firm prediction in this statement (which is essentially made in passing), but it's amusing to think how much he got right and how much he got wrong. Considering that he was writing this book in the early 1950s, he actually did make a pretty decent prediction when it came to average viewing time per person. In the US, the number is more like 4-5 hours a day (I'm betting that this will be in decline, especially in this year of the WGA strike), but worldwide, it's probably down around 3 hours a day. On the other hand, Clarke drastically underestimated the amount of content made available and also the effect of so much content.

The United States alone has 2,218 stations - over 4 times as many stations as Clarke predicted hours of programming. If we assume each station only broadcasts for an average of 16 hours a day, that works out to be over 35,000 hours of programming (70 times as much as Clarke had predicted for both TV and radio). And this doesn't even count things like On Demand, DVDs, and newer entertainment mediums like the Internet (which includes stuff like YouTube and podcasts, etc... in addition to the standard textual data) and Video Games.
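
For the curious, the back-of-the-envelope math (the 16-hour broadcast day is my assumption, as noted above):

    stations = 2218        # US broadcast TV stations
    hours_per_day = 16     # assumed average broadcast hours per station
    total = stations * hours_per_day
    print(total)           # 35488 hours of programming per day
    print(total / 500)     # ~71 times Clarke's 500 predicted hours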

Which brings me to the other interesting thing about Clarke's prediction. He seemed to think that when that much entertainment became readily available, we would become "passive sponges — absorbing but never creating." But in today's world, the opposite seems true. Indeed, content creation seems to be accelerating. To be sure, Clarke was right in the general sense that massive amounts of data do indeed come with problems of their own. Clarke is certainly right to note that you can only really experience a tiny fraction of what's out there at any given time, and this can be an issue. Ironically, a google search for "Information Overload" yields 2,150,000 results, which is as good an example as any. On a personal level, I don't think this goes as far as, say, Nicholas Carr seems to think, and as long as we find ways around the mammoth amounts of data we're all expected to assimilate on a daily basis (stuff like self-censorship seems to help), we should be fine.
Posted by Mark on July 30, 2008 at 07:06 PM .: link :.


End of This Day's Posts

Wednesday, April 02, 2008

Summoner Geeks
Via Haibane.info, I stumbled across this:


It's pretty funny and I got a little curious about the history of this thing. Apparently a sketch comedy troupe in Wisconsin called the Dead Alewives put together an album featuring a parody of Dungeons & Dragons. The audio skit is pretty funny by itself, and it's been making the rounds on radio and the internet ever since the mid 1990s. In 2000, a bunch of developers at a video game company, Volition (they made Descent, Red Faction, and of course, Summoner), made an animated version, and distributed it along with their games (it's in some promotional material and if you win the game, you see it there as well). So it went from an improvisational comedy group, to a CD they made, to the radio, to the internet, got mashed up with visuals from other video games, and has now finally made its way to me (about 12 years later).
Posted by Mark on April 02, 2008 at 10:42 PM .: link :.


End of This Day's Posts

Sunday, March 23, 2008

Vigilantes
I recently finished watching both seasons of Dexter. The series has a fascinating premise: the titular hero, Dexter Morgan, is a forensic analyst (he's a "blood spatter expert") for the Miami police by day, but a serial killer by night. He operates by a "code," only murdering other murderers (usually ones who've beaten the system). The most interesting thing about Dexter's code is the implication that he does not follow the code out of some sort of dedication to morality or justice. He knows what he does is evil, but he follows his code because it's the most constructive way to channel his aggression. Of course, the code is not perfect, and a big part of the series is how the code shapes him and how he, in turn, shapes it. To be honest, watching the series is a little odd and disturbing when you realize that you're essentially rooting for a serial killer (an affable and charming one, to be sure, but that's part of why it's disturbing). I started to think about this a bit, and several other examples of similar characters came to mind. There's a lot more to the series, but I don't want to ruin it with a spoiler-laden discussion here. Instead, I want to talk about vigilantes.

Despite the lack of concern for justice (or perhaps because of that), Dexter is essentially a vigilante... someone who takes the law into his own hands. There is, of course, a long history of vigilantism, in both real life and art. Indeed, many classic instances happened long before the word vigilante was coined - for example, Robin Hood. He stole from the rich to give to the poor, and was immortalized as a folk hero whose tales are still told to this day. I think there is a certain cultural fascination with vigilantes, especially vigilantes in art.

Take superheroes, most of whom are technically vigilantes. Sure, many stand for all that is good in the world and often cite truth and justice as motivation, but the evolution of comic books shows something interesting. I haven't read a whole lot of comic books (especially of the superhero kind), but the impression I get is that when the craze started in the 1930s, it was all about heroics and people serving the common good. There was also a darker edge to some of them, and that edge has grown as time progressed. Batman is probably the most relevant to this discussion, as he shares a complicated relationship with the police and a certain above-the-law attitude towards solving crimes. Interestingly, the Batman of the 1930s was probably a darker, more violent superhero than he was in the 1940s, when one editor issued a decree that the character could no longer kill or use a gun. As such, the postwar Batman became more of an upstanding citizen, and the stories took on a lighter tone (definitely an understandable direction, considering what the world had been through). I'm sure I'm butchering the Batman chronology here, but the next significant touchstone for Batman came in 1986, with the publication of Batman: The Dark Knight Returns. Written and drawn by Frank Miller, the series reintroduced Batman as a dark, brooding character with complex psychological issues. A huge success, this series ushered in a new era of "grim and gritty" superheroes that still holds today.

In general, our superheroes have become much more conflicted. Many (like Batman) tackle the vigilante aspect head on, and if you look at something like Watchmen (or The Incredibles, if you want a lighter version), you can see a shift in the way such stories are told. I'm sure there are literally hundreds of other examples in the comic book world, but I want to shift gears for a moment and examine another cultural icon that Dexter reminded me of: Dirty Harry.

Inspector Harry Callahan is an incredibly popular character, but apparently not with critics:
Critics have rarely cracked the whip harder than on the Dirty Harry film series, which follows the exploits of a trigger-happy San Francisco cop named Harry Callahan and his junior partners, usually not long for this world. On its release in 1971, Dirty Harry was trounced as 'fascist medievalism' by the potentate of the haut monde critic set, Pauline Kael, as well as aspiring Kaels like young Roger Ebert. Especially irksome to the criterati was a key moment in the film when Inspector Callahan, on the trail of an elusive serial sniper, is reprimanded by his superiors for not taking into account the suspect's Miranda rights. Callahan replies, through clenched teeth, "Well, I'm all broken up about that man's rights." Take that, Miranda.
I should say that critics often give the film (at least, the first one) generally good overall marks, praising its "suspense craftsmanship" or calling it "a very good example of the cops-and-killers genre." But I'm fascinated by all the talk of fascism. Despite working within the system, Dirty Harry indeed does take the law into his own hands, and in doing so he ignores many of our treasured Constitutional freedoms. And yet we all cheer him on, just as we cheer Batman and Dexter.

Why are these characters so popular? Why do we cheer such characters on even when we know what they're doing is ultimately wrong? I think it comes down to desire. We all desire justice. We want to see wrongs being made right, yet every day we can turn on the TV and watch non-stop failures of our system, whether it be rampant crime or a criminal going free or any number of other indignities. Now, I'm not an expert, but I don't think our society today is much worse off than it was, say, a hundred years ago (in fact, I think we're significantly better off, but that's another discussion). The big difference is that information is disseminated more widely and quickly, and dramatic failures of the system are attention grabbing, so that's what we get. What's more, these stories tend to focus on the most dramatic, most obscene examples. It's natural for people to feel helpless in the face of such news, and I think that's why everyone tends to embrace vigilante stories (note that people don't generally embrace actual real-life vigilantes - that's important, and we'll get to that later). Such stories serve many purposes. They allow us to cope with life's tragedies, internalize them and in some way comfort us, but as a deeper message, they also emphasize that the world is not perfect, and that we'll probably never solve the problem of crime. In some ways, they act as a critique of our system, pointing out its imperfections and thereby making sure we don't become complacent in the ever-changing fight against crime.

Of course, there is a danger to this way of thinking, which is why critics like Pauline Kael get all huffy when they watch something like Dirty Harry. We don't want to live in a police state, and to be honest, a real cop who acted like Dirty Harry would probably be an awful cop. Films like that deal in extremes because they're trying to make a point, and it's easy to misinterpret such films. I doubt people would really accept a cop like Dirty Harry. Sure, some folks might applaud his handling of the Scorpio case that the film documents (audiences certainly did!), but police officers don't handle a single case in the course of their career, and most cases aren't that black and white either. Dirty Harry would probably be fired out here in the real world. Ultimately, while we revel in such entertainment, we don't actually want real life to imitate art in this case. However, that doesn't mean we enjoy hearing about a vicious drug dealer going free because the rules of evidence were not followed to the letter. I think deep down, people understand that concepts like the rules of evidence are important, but they can also be extremely frustrating. This is why we have conflicting emotions when we watch the last scene in Dirty Harry, in which he takes off his police badge and throws it into the river.

I think this is a large part of why vigilante stories have evolved. Comic book heroes like Batman have become more conflicted, and newer comic books often deal with the repercussions of vigilantism. The Dirty Harry sequel, Magnum Force, was apparently made as a direct answer to the critics of Dirty Harry who thought that film was openly advocating law-sanctioned vigilantism. In Magnum Force, the villains are vigilante cops. Then you have modern-day vigilantes like Dexter, which pumps audiences full of conflicting emotions. I like this guy, but he's a serial killer. He's stopping other killers, but he's doing so in such a disturbing way.

Are vigilante stories fascist fantasies? Perhaps, but fantasies aren't real. They're used to illustrate something, and in the case of vigilante fantasies, they illustrate a desire for justice. The existence of a show like Dexter will repulse some people and that's certainly an understandable reaction. In fact, I think that's exactly what the show's creators want to do. They're walking the line between satisfying the desire for justice while continually noting that Dexter is not a good person. Ironically, what would repulse me more would be the complete absence of stories like Dexter, because the only way such a thing could happen would be if everyone thought our society was perfect. Perhaps someday concepts like justice and crime will be irrelevant, but that day ain't coming soon, and until it does, we'll need such stories, if only to remind us that we don't live in a perfect world.
Posted by Mark on March 23, 2008 at 07:16 PM .: link :.


End of This Day's Posts

Sunday, December 23, 2007

The Two Days of Christmas
I suppose I could have done a 12 days of Christmas post in the vein of the 4 weeks of Halloween posts, but there's obviously no time left. So here are a few things I've watched, read, or listened to recently in preparation for Christmas. That's all for now. Merry Christmas!
Posted by Mark on December 23, 2007 at 09:25 PM .: link :.


End of This Day's Posts

Wednesday, December 05, 2007

Rhetorical Strategy
Every so often, I see someone who is genuinely concerned with reaching the unreachable - whether it be scientists who argue about how to frame their arguments, alpha-geek programmers who try to figure out how to reach typical, average programmers, or critics who try to open a dialogue with feminists. Debates tend to polarize, and when it comes to politics or religion, assumptions of bad faith on both sides tend to derail discussions pretty quickly.

How do you reach the unreachable? Naturally, the topic is much larger than a single blog entry, but I did run across an interesting post by Jon Udell that outlines Charles Darwin's rhetorical strategy in the book, On the Origin of Species (which popularized the theory of evolution).
Darwin, says Slatkin, was like a salesman who finds lots of little ways to get you to say yes before you're asked to utter the big yes. In this case, Darwin invited people to affirm things they already knew, about a topic much more familiar in their era than in ours: domestic species. Did people observe variation in domestic species? Yes. And as Darwin piles on the examples, the reader says, yes, yes, OK, I get it, of course I see that some pigeons have longer tail feathers. Did people observe inheritance? Yes. And again, as he piles on the examples, the reader says yes, yes, OK, I get it, everyone knows that the offspring of longer-tail-feather pigeons have longer tail feathers.

By the time Darwin gets around to asking you to say the big yes, it's a done deal. You've already affirmed every one of the key pillars of the argument. And you've done so in terms of principles that you already believe, and fully understand from your own experience.

It only took a couple of years for Darwin to formulate the idea of evolution by natural selection. It took thirty years to frame that idea in a way that would convince other scientists and the general public. Both the idea, and the rhetorical strategy that successfully communicated it, were great innovations.
I think Udell oversimplifies the inception and development of the idea of evolution, but the point generally holds. Darwin's ideas didn't come into mainstream prominence until he published his book, decades after he had begun his work. Obviously, Darwin's strategy isn't applicable in every situation, but it is an interesting place to start (I suppose we should keep in mind that evolution is still controversial amongst the mainstream)...
Posted by Mark on December 05, 2007 at 08:29 PM .: link :.


End of This Day's Posts

Wednesday, November 28, 2007

Facial Expressions and the Closed Eye Syndrome
I've been reading Malcolm Gladwell's book, Blink, and one of the chapters focuses on the psychology of facial expressions. Put simply, we wear our emotions on our face, and some enterprising psychologists took to mapping the distinct muscular movements that the human face can make. It's an interesting process, and it turns out that people who learn these facial expressions (of which there are many) are eerily good at recognizing what people are really thinking, even if they aren't trying to show it. It's almost like mind reading, and we all do it to some extent or another (mostly, we do it unconsciously). Body language and facial expressions are packed with information, and we'd all be pretty much lost without that kind of feedback (perhaps why misunderstandings are more common on the phone or in email). Most of the time, our expressions are voluntary, but sometimes they're not. Even if we're trying to suppress our expressions, a fleeting look may cross our faces. Often, these "micro-expressions" last only a few milliseconds and are imperceptible, but when trained psychologists watch video of, say, Harold "Kim" Philby (a notorious Soviet spy) giving a press conference, they're able to read him like a book (slow motion helps).

I found this example interesting, and it highlights some of the subtle differences that can exist between expressions (in this case, between a voluntary and involuntary expression):
If I were to ask you to smile, you would flex your zygomatic major. By contrast, if you were to smile spontaneously, in the presence of genuine emotion, you would not only flex your zygomatic but also tighten the orbicularis oculi, pars orbitalis, which is the muscle that encircles the eye. It is almost impossible to tighten the orbicularis oculi, pars orbitalis on demand, and it is equally difficult to stop it from tightening when we smile at something genuinely pleasurable.
I found that interesting in light of the Closed Eye Syndrome I noticed in Anime. I wonder how that affects the way we perceive Anime. If a smiling mouth by itself means a fake expression of happiness while a smiling mouth and closed eyes means genuine emotion, does that make the animation more authentic? Animation obviously doesn't have the fidelity of video or film, but we can obviously read expressions from animated faces, so I would expect that closed eye syndrome exists more because of accuracy than anything else. In my original post on the subject, Roy noted that the reason I noticed closed eyes in anime could have something to do with the way Japan and the US read emotion. He pointed to an article that claimed Americans focus more on the mouth while the Japanese focus more on the eyes when trying to read emotions from facial expressions. One example from the article was emoticons. For happiness, Americans use a smiley face :) while the Japanese tend to use ^_^ (which seems to be a face with eyes closed). That might still be part of it, but ever since I made the observation, I've noticed similar expressions in American animation (I just recently noticed it a lot in a Venture Bros. episode). Still, occurrences in American animation seem less frequent (or perhaps less obvious), so perhaps the observation still holds.

Gladwell's book is interesting, as expected, though I'm not sure yet if he has a point other than to observe that we do a lot of subconscious analysis and make lots of split decisions, and sometimes this is good (other times it's not). Still, he's good at finding examples and drilling down into the issue, and even if I'm not sure about his conclusions, it's always fun to read. There's lots more on this subject in the book. For instance, he goes over how facial expressions and our emotions are a two-way phenomenon - meaning that if you intentionally contort your face in a specific way, you can induce certain emotions. The psychologists I mentioned earlier who were mapping expressions noticed that after a full day of trying to manipulate their facial muscles to show anger (even though they weren't angry), they felt horrible. Some tests have been done to confirm that, indeed, our facial expressions are linked directly to our brain. It's probably worth a read if that's your bag.
Posted by Mark on November 28, 2007 at 08:19 PM .: link :.


End of This Day's Posts

Sunday, November 18, 2007

The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.

This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices themselves rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.

Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.
Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...

I'm skeptical. The alternative to letting people choose is choosing for them. The lessons of a century of retail science (along with the history of Soviet department stores) are that this is not what most consumers want.
Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.
Settling for something that is good enough to meet your needs is quite different than just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.
Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"

The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented is important is interesting. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans dammit.

Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.

Online, however, the consumer has a lot more help. There are a nearly infinite number of techniques to tap the latent information in a marketplace and make that selection process easier. You can sort by price, by ratings, by date, and by genre. You can read customer reviews. You can compare prices across products and, if you want, head off to Google to find out as much about the product as you can imagine. Recommendations suggest products that 'people like you' have been buying, and surprisingly enough, they're often on-target. Even if you know nothing about the category, ranking best-sellers will reveal the most popular choice, which both makes selection easier and also tends to minimize post-sale regret. ...

... The paradox of choice is simply an artifact of the limitations of the physical world, where the information necessary to make an informed choice is lost.
I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think recommendations based on what other customers have purchased are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own personal anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to better utilize the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
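As an aside for the technically inclined, the "people like you" recommendations Anderson mentions are easy to illustrate with a toy model. Below is a minimal sketch of co-occurrence scoring in Python - the data and item names are made up, and this is emphatically not how Amazon or anyone else actually does it, just the basic idea of ranking items by how often they show up in the same basket as things you already own.

```python
# Toy "customers who bought X also bought Y" recommender.
# Purely illustrative; real systems layer ratings, weights, and
# fraud detection on top of something far more sophisticated.
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories, one set of items per customer.
histories = [
    {"zombie-movie", "chainsaw", "fake-blood"},
    {"zombie-movie", "fake-blood", "candy"},
    {"chainsaw", "work-gloves"},
    {"zombie-movie", "candy"},
]

# Count how often each ordered pair of items shares a basket.
co_occurrence = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(owned, k=3):
    """Rank items frequently bought alongside what the user owns."""
    scores = Counter()
    for item in owned:
        for (a, b), n in co_occurrence.items():
            if a == item and b not in owned:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"zombie-movie"}))  # e.g. ['fake-blood', 'candy', 'chainsaw']
```

Even this toy version shows where absurd recommendations come from: a few coincidental baskets and suddenly the chainsaw is being pitched to every zombie movie fan.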

When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?

I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source-Deep Throat-who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...

Did Jonathan Weil have a Deep Throat? Not really. He had a friend in the investment-management business with some suspicions about energy-trading companies like Enron, but the friend wasn't an insider. Nor did Weil's source direct him to files detailing the clandestine activities of the company. He just told Weil to read a series of public documents that had been prepared and distributed by Enron itself. Woodward met with his secret source in an underground parking garage in the hours before dawn. Weil called up an accounting expert at Michigan State.

When Weil had finished his reporting, he called Enron for comment. "They had their chief accounting officer and six or seven people fly up to Dallas," Weil says. They met in a conference room at the Journal's offices. The Enron officials acknowledged that the money they said they earned was virtually all money that they hoped to earn. Weil and the Enron officials then had a long conversation about how certain Enron was about its estimates of future earnings. ...

Of all the moments in the Enron unravelling, this meeting is surely the strangest. The prosecutor in the Enron case told the jury to send Jeffrey Skilling to prison because Enron had hidden the truth: You're "entitled to be told what the financial condition of the company is," the prosecutor had said. But what truth was Enron hiding here? Everything Weil learned for his Enron expose came from Enron, and when he wanted to confirm his numbers the company's executives got on a plane and sat down with him in a conference room in Dallas.
Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations-that is, summaries of the deals put together for interested parties-and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."
Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement. Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.
Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).

As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.


End of This Day's Posts

Wednesday, October 17, 2007

The Spinning Silhouette
This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it's being touted as a "right brain vs left brain test." The theory goes that if you see the silhouette spinning clockwise, you're right brained, and you're left brained if you see it spinning counterclockwise.

Every time I looked at the damn thing, it was spinning a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn't seem to rotate all the way around at all; instead it seems like she's moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen - and the leg always seems to be in front of the silhouette).

Of course, this is the essence of the illusion. The silhouette isn't actually spinning at all, because it's two dimensional. However, since my brain is used to living in a three dimensional world (and thus parsing three dimensional images), it's assuming that the image is also three dimensional. We're actually making lots of assumptions about the image, and that's why we can see it going one way or the other.

Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I'm pretty sure this thing is either broken or it's cheating. Well, I shouldn't say that. I noticed something off on one of the frames, and I'd be real curious to know how that affects people's perception of the illusion (to me, it means the image is definitely moving counterclockwise). I'm almost positive that it's too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold. First things first, here's the actual spinning silhouette.

The Spinning Silhouette

Again, some of you will see it spinning in one direction, some in the other direction. Everyone seems to have a different trick for getting it to switch direction. Some say to focus on the shadow, some say to look at the ankles. Closing my eyes and reopening seems to do the trick for me. Now let's take a closer look at one of the frames. Here's frame 12:

In frame 12, the illusion is still intact

Looking at this frame, you should be able to switch back and forth, seeing the leg behind the person or in front of the person. Again, because it's a silhouette and a two dimensional image, our brain usually makes an assumption of depth, putting the leg in front or behind the body. Switching back and forth on this static image was actually a lot easier for me. Now the tricky part comes in the next frame, number 13 (obviously, the arrow was added by me):

In frame 13, there is a little gash in the leg

Now, if you look closely at the leg, you'll see a little imperfection in the silhouette. Maybe I'm wrong, but that little gash in the leg seems to imply that the leg is behind the body. If you try, you can still get yourself to see the image as having the leg in front, but then you've got this gash in the leg that just seems very out of place.

So what to make of this? First, the imperfection is subtle enough (it's on 1 frame out of 34) that everyone still seems to be able to see it rotate in both directions. Second, maybe I'm crazy, and the little gash doesn't imply what I think. Anyone have alternative explanations? Third, is that imperfection intentional? If so, why? It does not seem necessary, so I'd be curious to know if the creators knew about it, and what their intention was regarding it.
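For anyone who wants to check my work without firing up the GIMP, here's a rough sketch of the same frame-by-frame inspection using Python and the Python Imaging Library (PIL). The filename is just whatever you saved the gif as.

```python
# Dump every frame of the animated gif to a png for eyeballing,
# then print the bounding box of what changed between consecutive
# frames - a stray imperfection like the frame 13 gash shows up
# when you inspect the dumped frames around an odd diff.
from PIL import Image, ImageChops, ImageSequence

gif = Image.open("spinning_silhouette.gif")  # hypothetical filename
frames = [f.convert("RGB") for f in ImageSequence.Iterator(gif)]
print(f"{len(frames)} frames")

for i, frame in enumerate(frames, start=1):
    frame.save(f"frame_{i:02d}.png")

for i in range(1, len(frames)):
    diff = ImageChops.difference(frames[i - 1], frames[i])
    print(f"frame {i} -> {i + 1}: changed region {diff.getbbox()}")
```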

Finally, as far as the left brain versus right brain portion goes, I find that I don't really care, but I am interested in how the imperfection would affect this "test." This neuroscientist seems to be pretty adamant about the whole left/right thing being hogwash though:
...the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).
At the very least, the traditional left/right brain theory is a wildly oversimplified version of what's really happening. The post also goes into the way the brain "fills in the gaps" for confusing visual information, thus allowing the illusion.

Update: Strange - the image appears to be rotating MUCH faster in Firefox than in Opera or IE. I wonder how that affects perception.
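My guess is that this comes down to frame delays: browsers enforce different minimum delays for animated gifs, so a gif authored with a very short frame delay gets slowed down by different amounts in different browsers. That's an assumption on my part, but you can at least check what delay the file actually specifies with the same PIL approach:

```python
# Print the authored per-frame delay in milliseconds. A browser
# that clamps tiny delays to its own minimum will play the gif
# slower than one that honors the value as written.
from PIL import Image, ImageSequence

gif = Image.open("spinning_silhouette.gif")  # hypothetical filename
for i, frame in enumerate(ImageSequence.Iterator(gif), start=1):
    print(f"frame {i}: {frame.info.get('duration', 0)} ms")
```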
Posted by Mark on October 17, 2007 at 10:42 PM .: link :.


End of This Day's Posts

Wednesday, October 03, 2007

Groping and Probing
So a few recent installments of Shamus' new comic, Chainmail Bikini, have created a bit of controversy. The comics in question are actually a series of 3 (the fact that there are 3 is a key part of the controversy, but we'll get to that in a moment). The controversy stems from the fact that there is a malicious groping in comic #6. Perhaps due to an ill-advised punchline ("improved stamina"), the discussion turned from one of groping and LARPing into one of rape. And we all know how funny discussions of rape can get.

To be honest, I didn't find this particular arc in the comics very funny. However, I didn't find it very offensive either (though I can see why some might think so). Also, while I didn't find it especially funny, I do think it makes an interesting statement about gaming in general.

I don't tend to read web-comics the same way I read blogs. I tend to let several installments build up, and then read them all. So I didn't read this particular story arc until I knew about the controversy, and I must admit to a little bit of observer bias. Knowing there was a controversy colored my reading of the comic, and two things immediately struck me.

First is that while there is an element of one guy antagonizing his buddy, there is also an element of probing. By probing, I'm referring to exploration of the limits of a game and its possibilities. Steven Johnson's book Everything Bad is Good for You has a chapter on Video Games which covers this concept really well, and I recently wrote about it:
Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you're supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it's usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the "correct" way.
Now again, in comic #6, one character is clearly attempting to antagonize his friend for choosing to role play a woman. However, I find it interesting that he chose to do so in a way that is consistent with his character (who is a Chaotic Neutral barbarian) and followed the rules of the game (rolling dice, etc...). According to the notes that accompany this arc, this sort of thing tends to happen when a campaign is not going well. If the players aren't having fun, they're going to make fun, and in a role playing game, they're going to do so by making their characters do something a little extreme. They don't do this because they are really extreme people, but because they want to see what happens. In short, they want to knock the game off its boring rails. In this case, one player's character groped another player's character. And from the aftermath in comics #7 and #8, you can see that things certainly got interesting. However, you also see that there were indeed consequences for the groping (one player physically assaults the other), and the comments that accompany each comic clearly attest that this is, in fact, a bad thing. To me, it's clear that the character in the comic is engaging in probing, but the comic also makes it clear that in a game as open-ended as D&D, it's possible to take things too far, which is why you saw a "real-world" reprisal (scare quotes due to the fact that this is a fictional comic, after all).

The second thing that struck me also had to do with the consequences. The situation immediately reminded me of this post from my friend Roy's feminist blog. He found this German poster which has a picture accompanied by this text:
Warning! Women defend themselves! If you leer at, catcall, or touch a woman, take into account that you might be loudly ridiculed, have a glass of beer poured over you, or be slapped in the face. Therefore, we strongly advise you to refrain from such harassment!
This is exactly what happened in comics #6 - #8. Well, not exactly. The comics actually take the consequences even further, while further abstracting the situation. Let me elaborate. The poster that Roy is pointing to is talking about real life situations. If you grope some woman at a bar, expect to be slapped in the face (or worse). What happened in the comics? An imaginary character who was role playing his own imaginary character groped another imaginary character that was being role played by yet another imaginary character. No one actually exists in this scenario, and yet there are indeed consequences for the groping. In fact, the consequences were the entire point of this character arc. So when I read comics #6-#8, I immediately saw them as a demonstration of Roy's poster. (Ironically, you could even read into this more, saying that the consequences have actually broken free of the imaginary world of Chainmail Bikini and taken root in the real world - in the form of a long comment thread and multiple blog postings like this one.)

Now, if one were so inclined, I can see why this arc would be grating. Personally, it doesn't bother me, but I've never been groped (er, against my will) and I can certainly understand how that could be off-putting (I suppose an argument could be made that there are some other gender issues as well). And as an astute commenter at Shamus' site points out, a lot of why this comic doesn't work as humor is due to the structure of the story:
A lot of why this doesn't work well as humour, and why it's ended up annoying people, is to do with the structure of the comic. I think Shamus really struggled with fitting a potentially amusing gag into the three-panel format, and ultimately didn't manage it successfully.

Here's what I mean. Comic 6 Panel 1 has the line "I'm exploring gender roles within the context of a roleplaying environment". The barbarian's player throws these words back in comic 7 panel 2. It's the punchline of a five-panel gag split over two comics. Structurally, this is a mess. It leads to a lame second gag to fill the rest of comic 7, but more importantly it means some sort of not-quite-a-punchline has to be contrived for the end of comic 6. That's where "improved stamina" comes from. Whatever is said in subsequent comics, it is really hard to read comic 6 in isolation without inferring that the barbarian's player intends to have his character vigorously sexually assault the female character. Because this is the last line of the comic, the additional implication is that we are meant to find this funny in itself. No wonder some people got offended.

Now, imagine doing the same thing over a slightly longer single comic of four or five panels. You would cut the "improved stamina" line for a start - it would serve no purpose any more. Instead, the comic ends on "I prefer to think of it as exploring gender roles within the context of a roleplaying environment". The first advantage of this is that it's a lot funnier. The punchline is where it's supposed to be, not buried half-way through the next comic. The second advantage is that the potential for offending readers is greatly reduced. It no longer reads as though we're meant to find rape or sexual assault funny: the humour is in the elf's player having his pretentiousness deflated in a basically harmless, if tasteless, way.
Shamus himself has noted that this explanation is not only accurate, but a good explanation as to why people are offended by what he essentially saw as a harmless joke. This makes sense to me. He wrote a strip that touched on a controversial subject in a humorous way, but then he was forced to cut it up and insert artificial punchlines, one of which implied more than he thought. From his point of view, the comic is basically the same as before, just split up a little. All of a sudden people start talking about rape and unsubscribing from the comic. I can see why he'd be a bit perplexed by even a reasonable objection to the comic.

I've never been a particularly great writer. When I was in high school, I always excelled at math and science, but never did especially well at English or writing. By college, I was much more comfortable with writing, and part of the reason for that was that I realized that writing isn't precise. Language is inherently vague and open to interpretation, and though there are some people who can wield language astoundingly well, most of us will open ourselves up to criticism simply by the act of expressing ourselves. One of my favorite quotes summarizes this well:
"To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!"
- Anne Morrow Lindbergh
Unfortunately, this simple miscommunication seems to have gotten lost in a thread of almost 200 comments. Some people have quit reading the comic altogether because of some perceived malice or ignorance on Shamus' part, while others have turned this into a divisive debate about rape. I don't want to start a holy war here, but when it comes to controversial stuff like this, I tend to give the creators the benefit of the doubt.

I think this whole controversy has brought up some interesting ideas, even if most have reduced it to a debate about rape. For instance, probing in games often takes the form of doing something extreme. My seemingly innocuous example above was turning your racecar around and driving the wrong direction to see what happens when you ram into another car. In real life, such an action would be catastrophic and could result in multiple deaths. Now, does doing something like that speak ill of me (the player)? How does wanton vehicular homicide compare to imaginary groping?

In my limited D&D gaming career, I played a Chaotic Evil thief who stole from his own party (i.e. one of my friends). Why did I do that? In real life, I'd never do such a thing. Why would I be interested in doing it in a role playing game? At a later point, I certainly suffered the consequences for my actions, and I think that's the rub. Playing games is all about setting up a paradigm, and sometimes half the fun is attempting to pull it down and find the holes in the paradigm, just to see what happens. I think that's a big part of why open-ended games like Grand Theft Auto are so popular. It's not the act of stealing a car or murdering a stranger that's fun, it's the act of attempting to derail the game. (Again, I touched on this in a post on game manuals.) In a recent discussion on what people like about Role Playing Games (also at Shamus' site), one of the most prominent answers was that good RPGs "...must give the player lots of freedom to make their own choices." One of the things I really hated about God of War (an otherwise awesome game) was that the character I was playing was a real prick. At one point, he goes out of his way to kill an innocent bystander (something about kicking him down into the hydra maybe? I don't remember specifically.) and that really annoyed me. What happened didn't bother me so much as the fact that I didn't have a choice in the matter. I don't really have an answer here, but I like games that give me a lot of freedom, because once I get bored by the forced or scripted aspects of the game, I can probe for weaknesses in the paradigm, and maybe even exploit them.

Update: I just noticed that Roy has tackled this subject on his blog. He seems quite disheartened by Shamus' post, though Roy wrote his post before the comment I quoted above was posted... My perception was that Shamus just couldn't understand why people were objecting... but once someone actually pointed out, in detail, why the humor doesn't work, he seemed to be more understanding (not only of why people were complaining, but of what people were suggesting by their complaints). But that's just me. I don't want to put words in Shamus' mouth, but as I already mentioned, I tend to give creators the benefit of the doubt.
Posted by Mark on October 03, 2007 at 07:55 PM .: link :.


End of This Day's Posts

Sunday, September 30, 2007

Halloweeny Links
Kaedrin's own monkey research squad strikes again, with a pseudo-horror/Halloween theme. Enjoy:
  • Kernunrex's Six Weeks of Halloween 2007: He of the Chronocinethon is taking a break from exhaustively exploring movies in chronological order, and watching lots of horror flicks in the six weeks leading up to Halloween. I wish I had thought of this (and had the time to implement it). I think the neatest thing about his schedule is that he sneaks in a bunch of shorts and trailers between his movies (for instance, he's got the classic Simpsons episode, The Shinning, and the great SNL Skit: Consumer Probe: Unsafe Halloween Costumes). I might have to do something like this in the near future. Or maybe I'll just go to the 24 Hour Horrorthon in Philly.
  • Horror Movie a Day: This guy takes to watching Horror movies with a zeal unseen since, uh, Kernunrex. Crap. Still, this guy watches 1 horror movie a day and posts a quick capsule review.
  • Dungeons & Dragons: Celebrating 30 Years of Very Stupid Monsters: What can be more fearsome than the Duckbunny? I dunno, the picture of the Squark kinda looks like Cthulhu if you don't look too close. And have poor vision. Also, with respect to the Giant Beaver (actual D&D monster), this Snapple cap that's been on my desk informs me that beavers were once the size of bears! Ok, I'll stop now. Lots of stupid monsters here.
  • The Legend of FacilityFocus: Funny "Underground Guide" for how to enter a repair request using UPenn's new web interface, done in the style of an adventure video game walkthrough. Is this horror? Well, as someone whose job involves usability, this is pretty horrific.
  • Plush Hellraiser: The Box. You Snuggled It. We Came. I'm mostly linking to this because of the brilliant title, but Widge has some neat suggestions for newly released plush Hellraiser toys.
Ok, so some of those are a stretch on the Halloween theme, but work with me here.
Posted by Mark on September 30, 2007 at 10:15 PM .: link :.


End of This Day's Posts

Monday, September 24, 2007

We Could Be Heroes
Just for one day though. Apologies for the missing entry yesterday and the lame entry today. Time is still tight, so I'll just throw out a link to 5 Questions Season Two of Heroes Had Better F#@king Answer.
Unlike a certain show about people stranded on a mysterious island that we won't name, by the end of its first season NBC's hit series Heroes had managed to neatly wrap up the vast majority of its plot threads and running storylines. The cheerleader was saved; the sword was retrieved; and the exploding man was stopped. We didn't watch the finale of the mystery island show that we're not naming, but we wouldn't be surprised if Locke was left speechless by the sight of Patrick Duffy in the shower. Had it all been a dream?
Some questions I have: Will they finally just get rid of Ali Larter's dumbass subplot? Which lame, cliched plot element will they get me to fall for anyway?

Update: The answer to my second question: Amnesia.
Posted by Mark on September 24, 2007 at 11:43 PM .: link :.


End of This Day's Posts

Sunday, September 16, 2007

Fantasy Football, 2007
As I mentioned earlier in the week, my schedule is pretty tight so my time for writing (and just about everything else) has been drastically reduced. So I'm just going to introduce my 2007 fantasy football team, the Star Wars Kids. I know most of my readers aren't big sports fans, but I can probably dash this off in a half hour, which I actually have enough time for. So I did very well last year, but my team peaked early and lost in the first round of the playoffs.

I was a little worried about this year. First, I had almost no time to prepare for the draft, which isn't usually a good sign. Second, the team I drafted seemed to be relying on a lot of "comeback" seasons (players who had a bad season or two due to injury or due to their team's performance, but who could make a comeback this year). Third, I ended up with a lackluster defense and my bench is a little weak. This is due to my position in the draft. I was last, but the draft is a snake, so I had the 12th and 13th picks, but then had to wait another two rounds for my next pick (36 overall). This position has its advantages, but it also meant that when a run on Defense/Special Teams happened, I ended up with scraps. Fourth, as an Eagles fan, I was frustrated by the fact that I ended up with Terrell Owens. He's a great performer, but on a personal level, I hate him. And he plays for the mortal enemy of the Eagles. I also have the Cowboys defense & special teams. Put simply, when the Eagles play the Cowboys, I'm going to be pretty conflicted.
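For the non-football-fans, the snake math is simple enough to sketch out. A quick illustration (the league size and slot are just my league's numbers):

```python
# Pick numbers in a snake draft: odd rounds run 1..n, even rounds
# run n..1. With 12 teams and the last slot, that's picks 12, 13, 36.
def snake_picks(n_teams, slot, rounds):
    picks = []
    for r in range(1, rounds + 1):
        if r % 2 == 1:  # odd round: normal order
            picks.append((r - 1) * n_teams + slot)
        else:           # even round: reversed order
            picks.append(r * n_teams - slot + 1)
    return picks

print(snake_picks(12, 12, 3))  # [12, 13, 36]
```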

Anyway, after a week and a half here, it seems that the team I drafted is doing quite well for itself. Many of my gambles are paying off, and I may have underestimated some of my "sure things." So here's my team:
  • Tom Brady: (QB) Yes, I had him last year and yes, I was a little disappointed by him. He did a solid job, week in and week out, but he was no Peyton Manning (I don't want to start a holy war here, but while we could debate which is better in real life, Manning has always been the better fantasy QB.) So if I was disappointed last year, why would I spend a second round draft pick on him this year? Put simply, he's got a real receiving corps now. Last year, he did a good job and he had no good receivers! This year, he's got Randy Moss, Wes Welker and Donte Stallworth. There was perhaps a little bit of a risk here, as Moss hasn't been stellar in Oakland... but, you know, he was playing in Oakland. Who wouldn't do poorly? Anyway, Brady put up huge numbers last week, and it was a good thing too, as my opponent had Peyton Manning (interestingly, they both put up the same number of fantasy points). As I type, Brady has put up 165 yards and 2 TDs and we're only a couple minutes into the 2nd Quarter of tonight's game. This may be my best pick of the year, though it didn't require much thought. Brady was a sure thing.
  • Travis Henry: (RB) This one had me worried. Henry is notoriously injury prone and inconsistent, and has had fumbling and substance abuse problems in the past. However, Denver coach Mike Shanahan loves to run the ball, and Henry is a workhorse when he's healthy. Then again, Shanahan is notorious for giving the ball to multiple backs, which is poison for fantasy owners. So far, so good. Henry has put up solid but not stellar numbers. This is about all I could expect, but there's always the nagging fear of injury (or, uh, being arrested or something).
  • Edgerrin James: (RB) He had a bad season last year, so this was a bit of a risk. However, everything I've seen says the problem was the team he was on and not him. The Cardinals didn't run much and were pretty awful last year, so it was difficult for James to gain any ground. However, with a new head coach and some other changes, I was betting on big things from James... and so far, things are going well. He's been my top running back in the first two weeks and shows no signs of slowing down.
  • Adrian Peterson: (RB) A rookie who was originally scheduled to share the load with Chester Taylor... but when Taylor went down with an injury early in last week's game, Peterson came up huge. Unfortunately, I had him on my bench. Peterson's going to be one of the people I put into the "Flex" position from week to week, so he may spend some time on the bench (especially if Taylor comes back), but he did a reasonable job this week.
  • Terrell Owens: (WR) As much as it pains me to admit it, TO is fantasy gold, and I got him relatively late in the draft. He's been one of my top performers and I'm sure he'll remain that way. He's good for double digit touchdowns, and as much as I dislike him on a personal level, I have to admit, I like the numbers he's putting up. I wish I had the fortitude of Bill Simmons:
    Just know that he'll never be on my team. I can't root for him. It's not in me. When TO does something good, I don't want to feel happy.
    I don't like rooting for him either. Makes me feel dirty. But he was a steal when I picked him up in the draft, and he's paid off in spades. *sigh*
  • Reggie Brown: (WR) Brown is the uncontested #1 WR in Philly, but he did nothing last week. Nothing. 1 reception for 11 yards. This is absurdly lame, and adds fuel to the "I hate having to root for TO" fire. Why can't I have a hometown player I can actually root for on my team? The last time that happened was 3 or 4 years ago when I had Brian Westbrook (who also happens to be from my alma mater). If Brown, who's riding my bench this week, doesn't do well tomorrow night, I'm not sure I'll keep him on my team.
  • Jerricho Cotchery: (WR) Put up mediocre numbers last week, but came up huge this week. Considering that I drafted him in one of the later rounds, I think this was a decent pickup, and he's earned his way to the number 2 WR slot on my team (though I'm pretty weak at WR).
  • Dallas Clark: (TE) Has had injury problems and is coming off a bad year, but he appears to be healthy and it's always nice to have one of Peyton Manning's targets on your team. He's put up some pretty solid numbers for me so far, so this late round draft pick seems worth it.
  • Cowboys: (D/ST) I hate the Cowboys, but due to a run on D/ST picks in between my picks, I really didn't have much of an option for D/ST. The Cowboys did crappy last week, but did a decent enough job today. That's all I can really ask for, though it would be really nice not to have to root for the Cowboys.
  • David Akers: (K) Well here's a hometown player I can root for, but the Kicker isn't exactly a premier position. Still, Akers is as solid as they come, and he should be able to put up decent numbers for me.
  • Bench: Texans QB Matt Schaub seems to be my best bench player, which would be great if he didn't have the same bye week as Tom Brady. D'oh! Nevertheless, he might make good trade bait. Or not. We'll see. For backup running backs, I've got the Bells (Mike and Tatum), neither of whom is all that great (though Mike Bell is Travis Henry's backup, which could be useful if Henry goes down with an injury). Rounding out the team are Drew Bennett (WR, crappy) and the Cardinals D/ST (also crappy).
So there you have it, the 2007 Star Wars Kids. So far, they've performed far beyond expectations, putting up a league high (tied for first, actually) 107 fantasy points last week. This week, they look even better, putting up 117 points so far, and Brady still has a half game left and Akers plays tomorrow night. There are still lots of things that could go wrong, and I could peak early like I did last season, but I'm still happy with my team's performance. I took a lot of gambles and picked several sleepers, and it looks like they're all paying off... so far.

Update: Greg's draft didn't go as well as mine, but I think he'll make do.
Posted by Mark on September 16, 2007 at 07:43 PM .: link :.


End of This Day's Posts

Sunday, June 10, 2007

Referential
A few weeks ago, I wrote about how context matters when consuming art. As sometimes happens when writing an entry, that one got away from me and I never got around to the point I originally started with (that entry was originally entitled "Referential" but I changed it when I realized that I wasn't going to write anything about references), which was how much of our entertainment these days references its predecessors. This takes many forms, some overt (homages, parody), some a little more subtle.

I originally started thinking about this while watching an episode of Family Guy. The show is infamous for its random cutaway gags - little vignettes that have no connection to the story, but which often make some obscure reference to pop culture. For some reason, I started thinking about what it would be like to watch an episode of Family Guy with someone from, let's say, the 17th century. Let's further speculate that this person isn't a blithering idiot, but perhaps a member of the Royal Society or something (i.e. a bright fellow).

This would naturally be something of a challenge. There are some technical explanations that would be necessary. For example, we'd have to explain electricity, cable networks, signal processing and how the television works (which at least involves discussions on light and color). The concept of an animated show, at least, would probably be easy to explain (but it would involve a discussion of how the human eye works, to a degree).

There's more to it, of course, but moving past all that, once we start watching the show, we're going to have to explain why we're laughing at pretty much all of the jokes. Again, most of the jokes are simply references and parodies of other pieces of pop culture. Watching an episode of Family Guy with Isaac Newton (to pick a prominent Royal Society member) would necessitate a pause just about every minute to explain what each reference was from and why Family Guy's take on it made me laugh. Then there's the fact that Family Guy rarely has any sort of redeemable lesson and often deliberately skews towards actively encouraging evil (something along the lines of "I think the important thing to remember is that it's ok to lie, so long as you don't get caught." I don't think that exact line is in an episode, but it could be.) This works fine for us, as we're so steeped in popular culture that we get the fact that Family Guy is just lampooning the notion that we could learn important life lessons via a half-hour sitcom. But I'm sure Isaac Newton would be appalled.

For some reason, I find this fascinating, and try to imagine how I would explain various jokes. For instance, the episode I was watching featured a joke concerning "cool side of the pillow." They cut to a scene in bed where Peter flips over the pillow and sees Billy Dee Williams' face, which proceeds to give a speech about how cool this side of the pillow is, ending with "Works every time." This joke alone would require a whole digression into Star Wars and how most of the stars of that series struggled to overcome their typecasting and couldn't find a lot of good work, so people like Billy Dee Williams ended up doing commercials for a malt liquor named Colt 45, which had these really cheesy commercials where Billy Dee talked like that. And so on. It could probably take an hour before my guest would even come close to understanding the context of the joke (I'm not even touching the tip of the iceberg with this post).

And the irony of this whole thing is that jokes that are explained simply aren't funny. To be honest, I'm not even sure why I find these simple gags funny (that, of course, is the joy of humor - you don't usually have to understand it or think about it, you just laugh). Seriously, why is it funny when Family Guy blatantly references some classic movie or show? Again, I'm not sure, but that sort of humor has been steadily growing over the past 30 years or so.

Not all comedies are that blatant about their referential humor though (indeed, Family Guy itself doesn't solely rely upon such references). A recent example of a good referential film is Shaun of the Dead, which somehow manages to be both a parody and an example of a good zombie movie. It pays homage to all the classic zombie films and it also makes fun of other genres (notably the romantic comedy), but in doing so, the filmmakers have also made a good zombie movie in itself. The filmmakers have recently released a new film called Hot Fuzz, which attempts the same trick for action movies and buddy comedies. It is, perhaps, not as successful as Shaun, but the sheer number of references in the film is astounding. There are the obvious and explicit ones like Point Break and Bad Boys II, but there are also tons of subtle homages that I'd wager most people wouldn't get. For instance, when Simon Pegg yells in the movie, he's doing a pitch perfect impersonation of Arnold Schwarzenegger in Predator. And when he chases after a criminal, he imitates the way Robert Patrick's T-1000 runs in Terminator 2.

References don't need to be part of a comedy either (though comedies seem to make the easiest examples). Hop on IMDB and go to just about any recent movie, and click on the "Movie Connections" link in the left navigation. For instance, did you know that the aforementioned T2 references The Wizard of Oz and The Killing, amongst dozens of other references? Most of the time, these references are really difficult to pick out, especially when you're viewing a foreign film or show that's pulling from a different cultural background. References don't have to be story or character based - they can be the way a scene is composed or the way the lighting is set (i.e. the Venetian blinds in Noir films).

Now, this doesn't just apply to art either. A lot of common knowledge in today's world is referential. Most formal writing includes references and bibliographies, for instance, and a non-fiction book will often assume basic familiarity with a subject. When I was in school, I was always annoyed at the amount of rote memorization they made us do. Why memorize it if I could just look it up? Shouldn't you be focusing on my critical thinking skills instead of making me memorize arbitrary lists of facts? Sometimes this complaining was probably warranted, but most of it wasn't. So much of what we do in today's world requires a well-rounded familiarity with a large number of subjects (including history, science, culture, amongst many other things). There simply isn't any substitute for actual knowledge. Though it was a pain at the time, I'm glad emphasis was put on memorization during my education. A while back, David Foster noted that schools are actually moving away from this, and makes several important distinctions. He takes an example of a song:
Jakob Dylan has a song that includes the following lines:

Cupid, don't draw back your bow
Sam Cooke didn't know what I know


Think of how much you need to know in order to understand these two simple lines:

1)You need to know that, in mythology, Cupid symbolizes love
2)And that Cupid's chosen instrument is the bow and arrow
3)Also that there was a singer/songwriter named Sam Cooke
4)And that he had a song called "Cupid" which included the lines "Cupid, draw back your bow."

... "Progressive" educators, loudly and in large numbers, insist that students should be taught "thinking skills" as opposed to memorization. But consider: If it's not possible to understand a couple of lines from a popular song without knowing by heart the references to which it alludes--without memorizing them--what chance is there for understanding medieval history, or modern physics, without having a ready grasp of the topics which these disciplines reference?

And also consider: in the Dylan case, it's not just what you need to know to appreciate the song. It's what Dylan needed to know to create it in the first place. Had he not already had the reference points--Cupid, the bow and arrow, the Sam Cooke song--in his head, there's no way he would have been able to create his own lines. The idea that he could have just "looked them up," which educators often suggest is the way to deal with factual knowledge, would be ludicrous in this context. And it would also be ludicrous in the context of creating new ideas about history or physics.
As Foster notes, this doesn't mean that "thinking skills" are unimportant, just that knowledge is important too. You need to have a quality data set in order to use those "thinking skills" effectively.

Human beings tend to leverage knowledge to create new knowledge. This has a lot of implications, one of which is intellectual property law. Giving limited copyright to intellectual property is important, because the data in that property eventually becomes available for all to build upon. It's ironic that educators are considering less of a focus on memorization, as this requirement of referential knowledge has been increasing for some time. Students need a base of knowledge to both understand and compose new works. References help you avoid reinventing the wheel every time you need to create something, which leads to my next point.

I think part of the reason references are becoming more and more common these days is that it makes entertainment a little less passive. Watching TV or a movie is, of course, a passive activity, but if you make lots of references and homages, the viewer is required to think through those references. If the viewer has the appropriate knowledge, such a TV show or movie becomes a little more cognitively engaging. It makes you think, it calls to mind previous work, and it forces you to contextualize what you're watching based on what you know about other works. References are part of the complexity of modern television and film, and Steven Johnson spends a significant amount of time talking about this subject in his book Everything Bad is Good for You (from page 85 of my edition):
Nearly every extended sequence in Seinfeld or The Simpsons, however, will contain a joke that makes sense only if the viewer fills in the proper supplementary information -- information that is deliberately withheld from the viewer. If you haven't seen the "Mulva" episode, or if the name "Art Vandelay" means nothing to you, then the subsequent references -- many of them arriving years after their original appearance -- will pass on by unappreciated.

At first glance, this looks like the soap opera tradition of plotlines extending past the frame of individual episodes, but in practice the device has a different effect. Knowing that George uses the alias Art Vandelay in awkward social situations doesn't help you understand the plot of the current episode; you don't draw on past narratives to understand the events in the present one. In the 180 Seinfeld episodes that aired, seven contain references to Art Vandelay: in George's actually referring to himself with that alias or invoking the name as part of some elaborate lie. He tells a potential employer at a publishing house that he likes to read the fiction of Art Vandelay, author of Venetian Blinds; in another, he tells an unemployment insurance caseworker that he's applied for a latex salesman job at Vandelay Industries. For storytelling purposes, the only thing that you need to know here is that George is lying in a formal interview; any fictitious author or latex manufacturer would suffice. But the joke arrives through the echo of all those earlier Vandelay references; it's funny because it's making a subtle nod to past events held offscreen. It's what we'd call in a real-world context an "in-joke" -- a joke that's funny only to people who get the reference.
I know some people who hate Family Guy and Seinfeld, but I realized a while ago that they don't hate those shows because of the contents of the shows or because they were offended (though some people certainly are), but rather because they simply don't get the references. They didn't grow up watching TV in the 80s and 90s, so many of the references are simply lost on them. Family Guy would be particularly vexing if you didn't have the pop culture knowledge of the writers of that show. These reference heavy shows are also a lot easier to watch and rewatch, over and over again. Why? Because each episode is not self-contained, you often find yourself noticing something new every time you watch. This also sometimes works in reverse. I remember the first time I saw Bill Shatner's campy rendition of Rocket Man: I suddenly understood a bit on Family Guy which I had thought was just random (but was really a reference).

Again, I seem to be focusing on comedy, but it's not necessarily limited to that genre. Eric S. Raymond has written a lot about how science fiction jargon has evolved into a sophisticated code that implicitly references various ideas, conventions and tropes of the genre:
In looking at an SF-jargon term like, say, "groundcar", or "warp drive" there is a spectrum of increasingly sophisticated possible decodings. The most naive is to see a meaningless, uninterpretable wordlike noise and stop there.

The next level up is to recognize that uttering the word "groundcar" or "warp drive" actually signifies something that's important for the story, but to lack the experience to know what that is. The motivated beginning reader of SF is in this position; he must, accordingly, consciously puzzle out the meaning of the term from the context provided by the individual work in which it appears.

The third level is to recognize that "ground car" and "warp drive" are signifiers shared, with a consistent and known meaning, by many works of SF -- but to treat them as isolated stereotypical signs, devoid of meaning save inasmuch as they permit the writer to ratchet forward the plot without requiring imaginative effort from the reader.

Viewed this way, these signs emphasize those respects in which the work in which they appear is merely derivative from previous works in the genre. Many critics (whether through laziness or malice) stop here. As a result they write off all SF, for all its pretensions to imaginative vigor, as a tired jumble of shopworn cliches.

The fourth level, typical of a moderately experienced SF reader, is to recognize that these signifiers function by permitting the writer to quickly establish shared imaginative territory with the reader, so that both parties can concentrate on what is unique about their communication without having to generate or process huge expository lumps. Thus these "stereotypes" actually operate in an anti-stereotypical way -- they permit both writer and reader to focus on novelty.

At this level the reader begins to develop quite analytical habits of reading; to become accustomed to searching the writer's terminology for what is implied (by reference to previous works using the same signifiers) and what kinds of exceptions and novelties convey information about the world and the likely plot twists.

It is at this level, for example, that the reader learns to rely on "groundcar" as a tip-off that the normal transport mode in the writer's world is by personal flyer. At this level, also, the reader begins to analytically compare the author's description of his world with other SFnal worlds featuring personal flyers, and to recognize that different kinds of flyers have very different implications for the rest of the world.

For example, the moderately experienced reader will know that worlds in which the personal fliers use wings or helicopter-like rotors are probably slightly less advanced in other technological ways than worlds in which they use ducted fans -- and way behind any world in which the flyers use antigravity! Once he sees "groundcar" he will be watching for these clues.

The very experienced SF reader, at the fifth level, can see entire worlds in a grain of jargon. When he sees "groundcar" he associates to not only technical questions about flyer propulsion but socio-symbolic ones about why the culture still uses groundcars at all (and he has a repertoire of possible answers ready to check against the author's reporting). He is automatically aware of a huge range of consequences in areas as apparently far afield as (to name two at random) the architectural style of private buildings, and the ecological consequences of accelerated exploitation of wilderness areas not readily accessible by ground transport.
While comedy makes for convenient examples, I think this better illustrates the cognitive demands of referential art. References require you to be grounded in various subjects, and they'll often require you to think through the implications of those subjects in a new context. References allow writers to pack incredible amounts of information into even the smallest space. This, of course, requires the consumer to decode that information (using available knowledge and critical thinking skills), making the experience less passive and more engaging. The use of references will continue to flourish and accelerate in both art and scholarship, and new forms will emerge. One could even argue that the aggregation done on various weblogs is simply an exercise in referential work. Just look at this post, in which I reference several books and movies, in many cases assuming familiarity. Indeed, the whole structure of the internet is based on the concept of links -- essentially a way to reference other documents. Perhaps this is part of the cause of the rising complexity and information density of modern entertainment. We can cope with it now, because we have such systems to help us out.
Posted by Mark on June 10, 2007 at 03:08 PM .: link :.


End of This Day's Posts

Wednesday, May 09, 2007

Beverage Blogging
Last week, I hastily threw together a post on Coke, including some thoughts on Coke vs. Pepsi, the advertising of both brands, and Passover Coke. I've run across several people commenting on my post or similar issues over the past week.
  • Diet Coke Zero Prime Plus: Aziz comments on the Coke/Pepsi rivalry and also talks a little about other varieties of Coke (Diet Coke, Coke Zero, Coke Plus, Diet Coke with Splenda, etc...)
  • The Other Red vs. Blue: Shamus explains why he usually doesn't drink Coke and points out that Coke has the best ads, referring to the GTA parody commercial (which is brilliant).
  • Mexican Coke at the Costco: Last week, I mentioned that there is clearly a market for Coke made with real cane sugar, and apparently Costco agrees. They've taken to importing Mexican Coke, which also uses cane sugar:
    Costco has conformed to CA and U.S. rules, such as CRV (the sort-of deposit you pay for the bottle) and "nutrition" labeling, so everything appears to be nice and legal. Of course you could always get your sugar water fix at some smaller grocers or taquerias by buying surprisingly expensive "bootlegged" bottles one at a time, but Costco will let Cokeheads stock up by the case at a relatively low price.
    The Mexican Coke adds another wrinkle into the mix: they come in glass bottles, which supposedly make the Coke taste better. I'm going to need to stock up on some regular Coke, Passover Coke, Mexican Coke, and sure, let's throw some Pepsi into the mix, and do a double blind test to see which cola tastes the best. Alas, this will have to wait for next year... [link via Kottke]
  • Tall Men: Australia is good at making beer ads: Alex sidesteps the issue and points to a great Aussie beer commercial featuring none other than.... Wacky Waving Inflatable Arm Flailing Tube Man, Wacky Waving Inflatable Arm Flailing Tube Man, Wacky Waving Inflatable Arm Flailing Tube Man!!!!!!! Sorry. It's a Family Guy thing.
  • But who cares about Coke or regular beer when you can brew yourself some Skittlebrau!
  • Speaking of brewing beer, Johno over at the Ministry of Minor Perfidy has been home brewing beer. I'd really like to try his Belgian ale, which he named Trogdor The Burninator "Consummate V" Belgian Strongbad Ale. Considering the price of good Belgian beer (and Belgian-style beers, see below), home brewing might be a good activity for me to try out.
And speaking of beer, I spent the previous weekend in Cooperstown. Sure, we visited the Baseball Hall of Fame Mvsevm, but the highlight of the trip for me was a visit to the Brewery Ommegang. It's a surprisingly small operation, but that makes sense when you realize that it's an expensive Belgian-style microbrew. I'm not a beer expert, but I think I've tried more varieties than your average person, and these are my absolute favorite beers of all time. Ommegang only makes 5 varieties, but they are all fantastic. Alas, you have to pay for that quality, but it's worth it. In any case, the tour ends with a beer tasting and you can buy some beer at a slight discount, which I did, giving me this:

Beer!

Awesome. Ok, I cheated a little. I already had the normal size bottles on the left, but still, that's an impressive array of beer. Looks like I've got some work to do!
Posted by Mark on May 09, 2007 at 09:54 PM .: Comments (3) | link :.


End of This Day's Posts

Wednesday, May 02, 2007

Link Dump: Coca-Cola Edition
I love Coca-Cola. I hate Pepsi. I probably wouldn't feel like that if it weren't for my parents. My brother prefers Pepsi. For reasons beyond my understanding, my parents nurtured this conflict. This is strange, since they generally just bought what was on sale (and we were growing up during the whole cola wars episode, so there were lots of sales). This manifested in various ways throughout the years, but the end result is that our preferences polarized. When I go to a restaurant and ask for a "Coke" and they ask if Pepsi is ok, I generally change my order to something else (root beer, water, etc...). Now, I'm not rude or even very confrontational about it, but this guy sure is:
"I'd like a Coca-Cola, please," I told the waiter.

"Will Pepsi be OK?" he replied.

"No, I'd like a Coke," I said.

"We serve only Pepsi products," he stammered.

"Does anyone ever ask for a Coke?" I asked.

"All the time," he said, "but we serve Pepsi."

"Could you run down to the 7-11 and get me a Coke -- they have plenty over there?" I asked with a smile.
Now, I've seen people say "No, Pepsi is not ok," but asking the waiter to run down to the 7-11 is pure, diabolical genius. Still, most of us Coke fiends aren't rude about our preferences. Take John Scalzi, who wrote a great Essay on Coca-Cola a while ago, delving into the advertising of Coke and Pepsi:
I think there really is something to how Coke positions itself. One hates to admit that one is influenced by corporate branding -- it means that those damned advertisers actually managed to do their job -- but what can you say. It works. Since Coke is the market leader, it doesn't spend any time as far as I can see banging on Pepsi or other brands; its ads stick to their knitting, which is making sure that people feel that Coke is part of everyday life -- and at some point during your day, you're probably going to have a Coke. It's inevitable. And hey -- that's okay. That's as it should be, in fact. I don't know that I would call Coke's ads soft sells (after all, they brand the product literally up the wazoo), but I don't find the advertising utterly annoying.

Which brings us back to Pepsi. Pepsi is eternally positioning itself as the outsider -- "Pepsi Generation," "Generation Next," so on and so forth. Always young, always fun, always mildly rebellious, yadda yadda yadda. Since one goes in knowing that Pepsi is a multibillion-dollar corporation, I've always found the rebellion angle amusing (and not just in Pepsi's case -- if you're a company that's big enough to advertise your wares every single day on national networks, you've gotten just a bit beyond being the rebel's choice, now, haven't you?). Being a rebel doesn't really work for me -- most of what is positioned as being a rebel is actually not rebellion, merely sullenness and inarticulateness. And really, I'm just too bourgeois for that at this point in my life. ... Besides, Pepsi can't seem to advertise itself without bringing up the point that Coke exists, and is the better-selling brand.
And it goes on for a bit too. Great article.

This year, I learned about the existence of Passover Coke. The current Coke formula uses corn syrup as a sweetener because it's cheaper than pure cane sugar, but since it's not kosher to eat corn during Passover, Coke makes some special batches of cola using pure cane sugar. It's only available in limited quantities for a few weeks a year (you can tell because it's got a yellow cap and Hebrew writing on it). I didn't get a chance to do a taste test this year, but Widge did, and he says that people prefer Passover Coke to regular Coke. This, of course, leads him to make the obvious suggestion:
Look. I know it's easier to work with and cheaper and all that good stuff. But let's face it: consumers are trying to get away from the high fructose stuff. I don't pretend to even understand all the health controversy that's going on, I tried to read up on the Wikipedia article before writing this and it mentioned "plasma triacylglycerol" and my eyes sort of glazed over (mmmm, glaze). It sounds like something the crew of Star Trek Voyager would seek out while being chased by cauliflower-headed aliens. But forget all that: it just freaking tastes better. That's all I care about, because if I was really concerned about my health, why would I be drinking Coke?

No offense.

Anyway, it's obvious you can make the stuff. It's obvious there's a market. I know just what to do: make a huge deal about how you believe in consumer choice and the market deciding things and release it as Coca-Cola Prime. Hell, if it's more expensive, charge more for it. Think about it: GET PRIMED WITH COKE. See? I'm giving you a campaign for free!
I'd buy it. Good stuff.
Posted by Mark on May 02, 2007 at 10:03 PM .: Comments (3) | link :.


End of This Day's Posts

Wednesday, April 18, 2007

Link Dump: Awesome Pictures Edition
Yes, time is still short these days, so just a few links featuring lots and lots of pictures: That's all for now. Sorry for the lameness of recent bloggery, but again, time is short.
Posted by Mark on April 18, 2007 at 10:37 PM .: Comments (2) | link :.


End of This Day's Posts

Wednesday, March 14, 2007

Mental Inertia
As I waded through dozens of recommendations for Anime series (thanks again to everyone who contributed), I began to wonder about a few things. Anime seems to be a pretty vast subject, and while I had only scratched the surface in the past, I really didn't have a good feel for what was available. So I asked for recommendations, and now I'm on my way. But it's not like I just realized that I wanted to watch more Anime. I've wanted to do that for a little while, but I've only recently acted on it. What took so long? Why is it so hard to get started?

This isn't something that's limited to deciding what to watch either. I find that just getting started is often the most difficult part of a task (or, at least, the part I seem to get stuck on the most). Sometimes it's difficult to deal with the novelty of a thing, other times a project seems completely overwhelming. But after I've begun, things don't seem so novel or overwhelming anymore. I occasionally find myself hesitant to start a new book or load up a new video game, but once I do, things flow pretty easily (unless the book or game is a really bad one). I have a bunch of ideas for blog posts that I never get around to attacking, but usually once I start writing, ideas flow much more readily. At work, I'll sometimes find myself struggling to get started on a task, but once I get past that initial push, I'm fine. Sure, there are excuses for all of these (interruptions, email, and meetings, for instance), but while they are sometimes true obstacles, they often strike me as rationalizations. Just getting started is the problem, but once I get into the flow, it's easy to keep going.

Joel Spolsky wrote an excellent essay on the subject called Fire and Motion:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

Somewhere between step 8 and step 9 there seems to be a bug, because I can't always make it across that chasm. For me, just getting started is the only hard thing. An object at rest tends to remain at rest. There's something incredibly heavy in my brain that is extremely hard to get up to speed, but once it's rolling at full speed, it takes no effort to keep it going.
It's an excellent point, and there does seem to be some sort of mental inertia at work here. But why? Why is it so difficult to get started?

When I think about this, I realize that this is a relatively new phenomenon for me. I don't remember having this sort of difficulty ten years ago. What's different? Well, I'm ten years older. The conventional wisdom is that it becomes more difficult to learn new things (i.e. to start something new) as you get older. There is some supporting evidence having to do with how the human brain becomes less malleable with time, but I'm not sure that paints the full picture. I think a big part of the problem is that as I got older, my standards rose.

Let me back up for a moment. A few years ago, a friend attempted to teach me how to drive a stick. I'd driven an automatic transmission my whole life up until that point, so the process of learning a manual transmission proved to be a challenging one. The actual mechanics of it are pretty straightforward and easily internalized. Sitting down and actually doing it, though, was another story. Intellectually, I knew what was going on, but it can be a little difficult to overcome muscle memory. I had a lot of trouble at first (and since I haven't driven a stick since then, I'd probably still have a lot of trouble today) and got extremely frustrated. My friend (who had gone through the same thing herself) laughed at this, making my lack of success even more infuriating. Eventually she explained that it wasn't that I was doing a bad job. It was that I was so used to being able to pick up something new and run with it that when I had to do something extra challenging that took a little longer to pick up, I became frustrated. In short, I had higher standards for myself than I should have.

I think, perhaps, that's why it's difficult to start something new. It's not that learning has become harder, it's that I've become less tolerant of failure. My standards are higher, and that will sometimes make it hard to start something. This post, for example, has been brewing in my head for a while, but I had trouble getting started. This happens all the time, and I've actually got a bunch of ideas for posts stashed away somewhere. I've even written about this before, though only in a tangential way:
This weblog has come a long way over the three and a half years since I started it, and at this point, it barely resembles what it used to be. I started out somewhat slowly, just to get an understanding of what this blogging thing was and how to work it (remember, this was almost four years ago and blogs weren't nearly as common as they are now), but I eventually worked up into posting about once a day, on average. At that time, a post consisted mainly of a link and maybe a summary or some short commentary. Then a funny thing happened: I noticed that my blog was identical to any number of other blogs, and thus wasn't very compelling. So I got serious about it, and started really seeking out new and unusual things. I tried to shift focus away from the beaten path and started to make more substantial contributions. I think I did well at this, but it couldn't really last. It was difficult to find the offbeat stuff, even as I pored through massive quantities of blogs, articles and other information (which caused problems of its own). I slowed down, eventually falling into an extremely irregular posting schedule on the order of once a month, which I have since attempted to correct, with, I hope, some success. I recently noticed that I have been slumping somewhat, though I'm still technically keeping to my schedule.
Part of the reason I was slumping back then was that my standards were rising again. The problem is that I want what I write to turn out good, and my standards are high (relatively speaking - this is only a blog, after all). So when I sit down to write, I wonder if I'll actually be able to do the subject justice. At a certain point, though, you just have to pull the trigger and get started. The rest comes naturally. Is this post better than I had imagined? Probably not, but then, if I waited until it was perfect, I'd never post anything (and plus, that sorta defeats the purpose of blogging).

One of the things I've noticed since changing my schedule to post at least twice a week is that it forces me to lower my standards a bit, just so that I can get something out on time. Back when I started the one-post-a-week schedule, I found that those posts were getting pretty long. I thought they were pretty good too, but as time went on, I wasn't able to keep up with my rising expectations. There's nothing inherently wrong with high expectations, but I've found it's good every now and again to adjust course. Even a well-made clock drifts and must be calibrated from time to time, and so we must calibrate ourselves from time to time as well.

Update 3.15.07: It occurs to me that this post is overly serious and may give you the wrong idea. In the comments, Pete notes that watching Anime is supposed to be fun. I agree wholeheartedly, and I didn't mean to imply differently. The same goes for blogging - I wrote a decent amount in this post about how blogging is difficult for me, but that's not really the right way to put it. I enjoy blogging too; that's why I do it. Sometimes I overthink things, and that's probably what I was doing in this post, but I think the main point holds. Learning can be impaired by high standards.
Posted by Mark on March 14, 2007 at 08:14 PM .: Comments (3) | link :.


End of This Day's Posts

Wednesday, February 14, 2007

Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.

I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited Times" is the key. In England, for example, an author does not merely hold a copyright on their work; they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.
The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems emphasize individual rights and rewards, and other systems are more limited. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.

The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to copy and distribute on a very large scale.
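To put some rough numbers on that convergence, here's a quick back-of-the-envelope sketch in Python. Every figure is a ballpark assumption on my part (a typical album, typical link speeds of the era), chosen only to show the orders of magnitude involved:

    # Rough arithmetic on why MP3 plus broadband changed the game.
    # All numbers are ballpark assumptions: a typical album, typical links.

    album_minutes = 45
    cd_rate_kbps = 1411          # 16-bit / 44.1 kHz stereo PCM
    mp3_rate_kbps = 128

    cd_mb = album_minutes * 60 * cd_rate_kbps / 8 / 1024
    mp3_mb = album_minutes * 60 * mp3_rate_kbps / 8 / 1024
    print(f"CD-quality album: ~{cd_mb:.0f} MB, as MP3: ~{mp3_mb:.0f} MB")

    for name, link_kbps in [("56k dial-up", 56), ("3 Mbps cable", 3000)]:
        minutes = mp3_mb * 1024 * 8 / link_kbps / 60
        print(f"{name}: ~{minutes:.0f} minutes to download the MP3 album")

An order-of-magnitude gain from compression multiplied by an order-of-magnitude gain in bandwidth is what turned copying from a nuisance into an industry-shaking problem.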

The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old man who doesn't even own a computer or know how to operate one).

Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is over the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.

The concept of borrowing a book, CD or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).

There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.

Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).
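To make that restriction model concrete, here's a deliberately toy sketch in Python of the "you can only use the media a certain number of times" idea. To be clear, this is not how any real DRM scheme is implemented; every name in it is made up for illustration:

    # Toy illustration of usage-count DRM; all names here are hypothetical.

    class LicensedTrack:
        def __init__(self, title, max_plays):
            self.title = title
            self.max_plays = max_plays   # the restriction, baked in at "purchase"
            self.plays = 0

        def play(self):
            if self.plays >= self.max_plays:
                raise PermissionError(f"License exhausted for {self.title!r}")
            self.plays += 1
            print(f"Playing {self.title} ({self.plays} of {self.max_plays})")

    track = LicensedTrack("Some Song", max_plays=3)
    for attempt in range(4):
        try:
            track.play()
        except PermissionError as error:
            print(error)   # the fourth attempt is refused

Note the obvious flaw, which foreshadows the argument below: the play counter lives entirely on the customer's machine, under the customer's control.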

To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if at all) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.

Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.

DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computers and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit but actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.

A few months ago, my Windows computer died and I decided to give Linux a try. I wanted to see if I could get Linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on Linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that format on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.

Which brings me to my next point: DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, it even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes Music Store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the average person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.

Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the emusic service sells high quality, variable bit rate MP3 files without DRM, and that has established emusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for purely ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.

Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16-digit CD keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.
For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.

The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).
This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided before now. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware… And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.

My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of these are true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited from giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix it, but I don't think it has been adopted to the extent that it should be.

To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
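If "easily over a century" sounds like hyperbole, a little arithmetic bears it out. Here's a sketch with a hypothetical author; the lifespan figure is an assumption, and the term is the US life-plus-70 rule (works generally enter the public domain on January 1 after the term runs out):

    # Back-of-the-envelope: how long a work created today could stay locked up.
    # Hypothetical author; US term is life of the author plus 70 years.

    publication_year = 2007
    author_age_at_publication = 30
    author_lifespan = 80                      # assumed

    death_year = publication_year + (author_lifespan - author_age_at_publication)
    public_domain_year = death_year + 70 + 1  # Jan 1 following the 70th year

    print(f"Protected from {publication_year} until {public_domain_year}, "
          f"or {public_domain_year - publication_year} years")

That's 121 years, in this case, for a work whose author is alive and well today.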

Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...

Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.


End of This Day's Posts

Wednesday, January 31, 2007

Samoas versus Caramel deLites
My favorite Girl Scout cookies are unquestionably the Samoas (Thin Mints and Tagalongs are also quite good, but nothing compares to the mighty Samoa). Several years ago, I went to purchase a box and was surprised to learn that they changed the name to Caramel deLites. And they seemed to taste different too! It didn't take long to notice that Samoas were still being sold, and as it turns out, there are two commercial bakeries that are licensed to make Girl Scout cookies. Little Brownie Bakers have the strange names that we are nonetheless familiar with: Samoas, Tagalongs, Do-si-dos, Trefoils, etc... ABC Bakers are much more prosaic and descriptive: Caramel deLites, Peanut Butter Patties, Peanut Butter Sandwiches, Shortbread, etc...

Generally, both bakeries are pretty good, but the question is, what are the differences and which are better? Let's take a look at Samoas versus Caramel deLites.

Caramel deLites and Samoas


The Caramel deLites are on the left, and the Samoas are on the right. As you can see, the Caramel deLites have a somewhat lighter color to them, and that's partially because they use milk chocolate as opposed to dark chocolate. Wikipedia says they don't have as much caramel as Samoas, but I'm not sure about that. Personally, I think they're chewier than Samoas, and if I had to choose, I'd choose Samoas. But maybe I'm just weird. I asked around, and there didn't seem to be a consensus. Some people loved one variety, others loved the other, most were indifferent.

So I did a test. I put one box of each on my desk, removed any identification, and put up a note asking people to try one of each and vote for their preferred cookie. This was a single-blind test, and the cookies were labeled only A and B. Ok, so it was hardly a stringent methodology and a lot of people knew which were which just by looking at them, but in the end, it appears that Samoas have a slight edge. A sample size of 8 people is statistically significant enough for me, and it came out 5-3 in favor of Samoas. So there, Samoas are empirically better than Caramel deLites. It's scientific!
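For the morbidly curious, here's what that 5-3 split looks like under an actual significance test: a two-sided exact sign test in Python (standard library only), under the null hypothesis that both cookies are equally liked:

    # Exact two-sided sign test for 5 of 8 votes, fair-coin null hypothesis.
    from math import comb

    n, k = 8, 5                                  # 8 tasters, 5 votes for Samoas
    p_at_least_k = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    p_two_sided = 2 * p_at_least_k               # symmetric when p = 0.5
    print(f"two-sided p = {p_two_sided:.3f}")    # -> 0.727

A p-value of 0.727 means "statistically significant enough for me" is doing a lot of heavy lifting, but then, that's what makes this a family cookie test and not a clinical trial.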

A couple of us also compared the Thin Mints (the only cookie I know of that has the same name no matter which bakery makes it), but results were mixed. The cookies are clearly different, and the Thin Mint from ABC Bakers (the ones with the prosaic names) actually seems more minty, but they're both pretty good. No stats for this one, but anecdotal evidence suggests that people like the ABC Bakers version better. So there you go. They're both good.

Incidentally, if you can get your hands on Edy's® Girl Scouts® Samoas® Cookie Ice Cream, I highly recommend stocking up. It's available slightly longer than the cookies are, but it'll be gone by March, and it's quite possibly the greatest ice cream ever created.
Posted by Mark on January 31, 2007 at 09:16 PM .: Comments (7) | link :.


End of This Day's Posts

Wednesday, January 03, 2007

Japanese Cootie Shots
One of the things that interests me about foreign films is the way various aspects of culture become lost in the translation to English. In some cases, this is due to the literal translation of dialogue, but in others it's due to a physical mannerism or custom that simply can't be translated. In a post about Lain's Bear Pajamas in the Anime series Serial Experiments Lain, I mention an example of such a gesture that appears in Miyazaki's Spirited Away. Of course, I got the details of the gesture completely wrong in that post, but the general concept is similar. Since Spirited Away is the next film in the Animation Marathon, I got the DVD and took some screenshots. The main character, a little girl named Chihiro, steps on a little black slug and the boiler room man, Kamaji, says that this is gross and will bring bad luck. So she turns around and puts her thumbs and forefingers together while he pushes his hand through (click the images for a larger version).

Chihiro
Chihiro
Chihiro

Now this is obviously some sort of gesture meant to counteract bad luck, but it's a little strange. The dialogue in the scene helps, though the subtitles and the dubbing differ considerably (as I have been noticing lately). The subtitled version goes like this:
KAMAJI: Gross, gross, Sen! Totally gross!
(CHIHIRO puts her hands in the shape of a rectangle.)
KAMAJI (pushing his hand through the rectangle): Clean!
Quite sparse, though the meaning is relatively clear. The dubbed version expands on the concept a little more:
KAMAJI: You killed it! Those things are bad luck. Hurry, before it rubs off on you! Put your thumbs and forefingers together.
(CHIHIRO puts her hands in the shape of a rectangle.)
KAMAJI (pushing his hand through the rectangle): Evil... begone!
I noticed this gesture the first time I saw the movie, because I thought it was strange and figured that there had to be a little more to it than what was really being translated. On the DVD there is a little featurette called The Art of 'Spirited Away', and in one of the sections, the translators mention that they were baffled by the gesture and weren't sure how to translate it. After researching the issue, they concluded that it's essentially the Japanese equivalent of a cootie shot. Of course, this makes a lot of sense, and it's totally something a kid would do in response to stepping on something gross (this film, like many of Miyazaki's other films, seems to nail a lot of the details of what it's like to be a kid). It also illustrates that the boiler room man isn't quite as gruff as he appears, and that he even has a bit of a soft spot for children. Interestingly enough, this gesture is later repeated by a little mouse (I think it's a mouse) and the soot balls that work in the boiler room, though I don't remember that part (I'll try to grab screenshots when I rewatch the whole film).

Again, Spirited Away is the next film in the Animation Marathon, and it's probably the best of the bunch as well. Expect a full review soon, though I'm not sure how detailed it will be. Filmspotting (the podcast that's actually running the marathon) is on a bit of a break from the marathon, as they're doing their obligatory 2006 wrap up shows and best of the year lists.
Posted by Mark on January 03, 2007 at 11:50 PM .: Comments (5) | link :.


End of This Day's Posts

Sunday, December 24, 2006

Merry Christmas
In the future, pine trees will be extinct, and then what will we do for Christmas trees? We'll use a cactus. I present you with this year's Traditional Kaedrin Christmas Cactus:

Traditional Kaedrin Christmas Cactus

The picture didn't turn out as well as last year (it keeps coming out fuzzy for some reason, perhaps because of all the extra lights or because of the lighting - hey look, a handy guide for taking pictures of Christmas lights), but it'll do well enough.

Moving on, a few other Christmas links for your enjoyment: That's all for now. Go forth, and watch your Anime.
Posted by Mark on December 24, 2006 at 10:52 PM .: Comments (0) | link :.


End of This Day's Posts

Tuesday, December 19, 2006

It was only a fantasy...
I've never been much of a sports fan, but in recent years I have become a fantasy sports fan. The funny thing about fantasy sports is that they totally distort the importance of events in games. Take, for instance, last week's Monday Night Football game. We were nearing playoff time in fantasy football. My roommate and I were dominating the league, and had clinched playoff spots. There was one other team with a winning record who had also clinched. And there were 2 teams in contention for the final playoff spot. It's a head-to-head league, and I was playing one of those 2 teams. Due to some bad performances by key members of my team (*cough, cough, Tom Brady, cough*), I was down by 5 points by the end of the Sunday games. My opponent had no players remaining, but I had 1 person playing in the Monday night game. There's just one problem: he's a kicker - not a position known for high scoring. A kicker gets 1 fantasy point for every extra point he kicks, and field goals are worth 3-6 points (depending on the distance of the kick). So basically, what you had last week was 4 or 5 people throughout the northeast intensely following and rooting for (or against)... a kicker.
Me: They're in field goal range! Call in Wilkins!
Roommate: Dude, it's second down. I don't think they're going to kick it.
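For anyone unfamiliar with how those kicker points tally up, here's a minimal sketch. The distance tiers are an assumption on my part, since every league sets its own:

    # Minimal fantasy-kicker scoring sketch; distance tiers vary by league,
    # so the cutoffs below are illustrative assumptions.

    def field_goal_points(distance_yards):
        if distance_yards < 40:
            return 3
        elif distance_yards < 50:
            return 4
        elif distance_yards < 60:
            return 5
        return 6

    def kicker_points(extra_points, field_goal_distances):
        return extra_points + sum(field_goal_points(d) for d in field_goal_distances)

    # A plausible Monday night line: 2 extra points, field goals from 27 and 44 yards
    print(kicker_points(2, [27, 44]))   # -> 9, enough to erase a 5-point deficit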
As luck would have it, I lost. However, I was still in the playoffs, and I ended up playing the same person I would have played anyway. Alas, it appears that my team peaked early. After going 12-1 during the first 13 weeks of play, I've gone 0-2 in the past two weeks. I lost in the first round of the playoffs. There may still be some hope of placing third, but I must concede that my season didn't end the way I planned. The main culprit here was injuries, as my top Wide Receiver and another solid Running Back both went down in recent weeks, thus weakening my team considerably. Nevertheless, I bear my team no ill will, and so I'll let the Badgers take a bow:
  • Tom Brady: (QB) In some ways, he's been a bit of a disappointment, but in reality, he's done about as well as I could have ever hoped. Quarterback was a tough position to fill this year, what with all the underperforming stars and former stars and rookies and injuries. There were probably only a handful of consistent performers, and a couple of abominable weeks aside, Brady was one of them.
  • Larry Johnson: (RB) At the start of the season, there were really only 3 elite running backs to get, and LJ was one of them. I was fortunate enough to get the second overall pick in the draft, so I was able to get him (ironically, the 3 backs were drafted in the opposite order of their eventual performance). Overshadowed by the obscenely dominant LaDainian Tomlinson (who has already scored a record-breaking 33 touchdowns, and he still has two games left in the season), Johnson was actually my leading scorer.
  • Kevin Jones: (RB) Apparently, this guy went to my high school. Go figure. In any case, for most of the year, he was my surprisingly productive second back (surprising in that, you know, he plays for the Lions).
  • Ahman Green: (RB) He made a nice third back option when I needed him, and managed to fill in well for Jones when the injuries started coming. He spent a decent portion of the season on the bench, and I got him very late in the draft, so I was pretty happy.
  • Larry Fitzgerald: (WR) He was supposed to be my premier receiver and did very well until he got injured for several weeks. He came back towards the end of my run, and put up decent numbers. Not quite the spectacular year everyone was expecting from him, but decent nonetheless.
  • Darrell Jackson: (WR) Up until last week, he's been one of the steadiest players on the team, consistently putting up high fantasy numbers. Then he got injured and didn't play last week. I started one of his backups, Nate Burleson, but he didn't do anything. Darn.
  • Jason Witten: (TE) Tight Ends don't normally put up big numbers, and Witten was no exception. Still, I was expecting more than a single touchdown from the guy. A few years ago he damn near put up a thousand yards with 6 touchdowns (and he had a very respectable season last year too). No one ever counts on their tight ends, really, and Witten didn't do that badly, but still.
  • Jeff Wilkins: (K) Early in the season, this guy was putting up huge numbers. Huge. This is, of course, absurd for a kicker, and it didn't last. Still, he did better than anyone would ever have expected.
  • San Diego: (D/ST) The SD defense was quite good this year, and netted me a fair amount of points, considering that I drafted them pretty late in the draft. I started the season with Denver, but SD consistently outscored them, so SD got the call for most of the season, and did a good job.
  • Miscellaneous: I picked up Brandon Jacobs off the waiver wire and had him filling in for a few weeks during some of the injury-laden times. He makes a surprisingly decent third fantasy back because even though he doesn't get a lot of touches, he gets them where they count: the goal line. Tiki Barber owners must be furious (this is another example of fantasy distorting reality). Kevan Barlow had a similar (but much less consistent) situation going in New Jersey, but pretty much rode the bench for me all year long. Keyshawn Johnson and Isaac Bruce both put up consistent (relatively low, but still decent) numbers and made some appearances at the flex position throughout the year, but neither really did a ton for me. I picked up Tony Romo towards the end of the season, and pretty much regretted not starting him every week (especially the week he threw for 5 touchdowns). But still, how do you start a young, unproven punk like Romo over someone like Brady?
All in all, it was a decent year, even if they did peak a little early and get injured a little too often. I've made it to the playoffs in two of the last three years (and the one year I didn't was due to an uncharacteristic bad draft pick). This is actually not half bad for someone who doesn't pay attention to sports!
Posted by Mark on December 19, 2006 at 08:48 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, December 17, 2006

Just Do It
In his essay Made in USA, Paul Graham writes about American tendencies in design.
Americans are good at some things and bad at others. We're good at making movies and software, and bad at making cars and cities. And I think we may be good at what we're good at for the same reason we're bad at what we're bad at. We're impatient. In America, if you want to do something, you don't worry that it might come out badly, or upset delicate social balances, or that people might think you're getting above yourself. If you want to do something, as Nike says, just do it.
It's amazing how well the "Just Do It" marketing line fits America (the only other tagline that works as well is EA Sports' "If it's in the game, it's in the game" line), and Graham is certainly right about how that affects programmers. I've noticed that there are really two different types of programmers: people who look stuff up, and people who just try it to see if it works. People ask me questions about HTML or CSS all the time. Sometimes I know the answer, sometimes I don't, but most of the time my response is "Have you tried it to see what happens?" HTML is pretty simple, and it's easy to test out various concepts. There's no reason not to, and trying it is also the best way to learn (there's a quick sketch of this workflow after the parable below). I'm reminded of this design parable about a ceramics class:
The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot - albeit a perfect one - to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work - and learning from their mistakes - the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
There are several interesting things about this. First, as Graham notes in his essay, good craftsmanship means working fast and iterating your design. Second, failure isn't a bad thing in this story. In fact, failure is a necessary component of success. In such a scenario, people who work fast and iterate do much better than people who meticulously plan their designs. As Graham belabors in his essay, this works for some things, but not others.
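As promised above, in the "just do it" spirit, here's a throwaway harness for those HTML/CSS "have you tried it?" questions: write a scratch page to a temp file and open it in a browser. A sketch in Python, purely to automate the ritual:

    # A scratch pad for "just try it and see what happens" HTML/CSS questions.
    import os
    import tempfile
    import webbrowser

    html = """<!doctype html>
    <html>
      <head>
        <style>
          p { color: teal; }   /* change a rule, re-run, see what happens */
        </style>
      </head>
      <body>
        <p>Does this paragraph turn teal?</p>
      </body>
    </html>
    """

    path = os.path.join(tempfile.gettempdir(), "try-it.html")
    with open(path, "w") as f:
        f.write(html)
    webbrowser.open("file://" + path)

Tweak a rule, re-run, reload. The feedback loop is seconds long, which is exactly why "just try it" beats looking it up for questions like these.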

Of course, not all American designs are bad, and Graham mentions the obvious exception:
Apple is an interesting counterexample to the general American trend. If you want to buy a nice CD player, you'll probably buy a Japanese one. But if you want to buy an MP3 player, you'll probably buy an iPod. What happened? Why doesn't Sony dominate MP3 players?
It's because Apple is obsessed with good design ("Or more precisely, their CEO is.") Interestingly, I think one of the reasons the iPod is so successful is that Apple understands the paradox of choice really well. The iPod isn't, and has never really been, the leader in terms of features or functionality. But it does what it does extremely well, and I think that's partly because the iPod is actually quite simple. If you loaded it up with all sorts of extra features, there's no way you'd be able to keep the simplicity of the interface, and that would make it harder to use, and much less attractive.

In the end, I don't know that I agree with everything in Graham's essay, but his stuff is always worth reading.
Posted by Mark on December 17, 2006 at 07:41 PM .: Comments (4) | link :.


End of This Day's Posts

Sunday, October 22, 2006

The Paradox of Choice
At the UI11 Conference I attended last week, one of the keynote presentations was made by Barry Schwartz, author of The Paradox of Choice: Why More Is Less. Though he believes choice to be a good thing, his presentation focused more on the negative aspects of offering too many choices. He walked through a number of examples that illustrate the problems with our "official syllogism," which is:
  • More freedom means more welfare
  • More choice means more freedom
  • Therefore, more choice means more welfare
In the United States, we have operated as if this syllogism is unambiguously true, and as a result, we're deluged with choices. Just take a look at a relatively small supermarket: there are 285 cookies, 75 iced teas, 275 cereals, 40 toothpastes, 230 soups, and 175 salad dressings (not including 12 extra virgin olive oils and 18 vinegars which could be combined to make hundreds of vinaigrettes) to choose from. At your typical Circuit City, the sheer breadth of stereo components allows you to create any one of 6.5 million possible stereo systems. And this applies all throughout our lives, extending even to working, marriage, and whether or not to have children. In the past, these things weren't much of a question. Today, everything is a choice. [thanks to Jesper Rønn-Jensen for his notes on Schwartz's talk - it's even got pictures!]
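That 6.5 million figure is just multiplication at work. The per-category counts below are hypothetical (Schwartz doesn't break them down), but they show how a few well-stocked shelves compound into millions of possible systems:

```python
# Hypothetical shelf counts per component category - illustrative only,
# not Schwartz's actual Circuit City inventory.
components = {
    "receivers": 30,
    "cd_players": 25,
    "speaker_pairs": 40,
    "subwoofers": 15,
    "turntables": 14,
}

total = 1
for category, count in components.items():
    total *= count

print(f"{total:,} possible systems")  # 6,300,000 - the same order of magnitude
```

Five modest shelves, and you're already staring down millions of possible decisions.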

So how do we react to all these choices? Luke Wroblewski provides an excellent summary, which I will partly steal (because, hey, he's stealing from Schwartz after all):
  • Paralysis: When faced with so many choices, people are often overwhelmed and put off the decision. I often find myself in such a situation: Oh, I don't have time to evaluate all of these options, I'll just do it tomorrow. But, of course, tomorrow is usually not so different than today, so you see a lot of procrastination.
  • Decision Quality: Of course, you can't procrastinate forever, so when forced to make a decision, people will often use simple heuristics to evaluate the field of options. In retail, this often boils down to evaluation based mostly on Brand and Price. I also read a recent paper on feature fatigue (full article not available, but the abstract is there) that fits nicely here.

    In fields where there are many competing products, you see a lot of feature bloat. Loading a product with all sorts of bells and whistles will differentiate that product and often increase initial sales. However, all of these additional capabilities come at the expense of usability. What's more, even when people know this, they still choose high-feature models. The only thing that really helps is when someone actually uses a product for a certain amount of time, at which point they realize that they either don't use the extra features or that the tradeoffs in terms of usability make the additional capabilities considerably less attractive. Part of the problem is perhaps that usability is an intangible and somewhat subjective attribute of a product. Intellectually, everyone knows that it is important, but when it comes down to decision-time, most people base their decisions on something that is more easily measured, like number of features, brand, or price. This is also part of why focus groups are so bad at measuring usability. I've been to a number of focus groups that start with a series of exercises in front of a computer, then end with a roundtable discussion about their experiences. Usually, the discussion was completely at odds with what the people actually did when in front of the computer. Watch what they do, not what they say...
  • Decision Satisfaction: When presented with a lot of choices, people may actually do better for themselves, yet they often feel worse due to regret or anticipated regret. Because people resort to simplifying their decision making process, and because they know they're simplifying, they might also wonder if one or more of the options they cut was actually better than what they chose. A little while ago, I bought a new cell phone. I actually did a fair amount of work evaluating the options, and I ended up going with a low-end no-frills phone... and instantly regretted it. Of course, the phone itself wasn't that bad (and for all I know, it was better than the other phones I passed over), but I regret dismissing some of the other options, such as the camera (how many times over the past two years have I wanted to take a picture and thought Hey, if I had a camera on my phone I could have taken that picture!)
  • Escalation of expectations: When we have so many choices and we do so much work evaluating all the options, we begin to expect more. When things were worse (i.e. when there were fewer choices), it was much easier to exceed expectations. In the cell phone example above, part of the regret was no doubt fueled by the fact that I spent a lot of time figuring out which phone to get.
  • Maximizer Impact: There are some people who always want to have the best, and the problems inherent in too many choices hit these people the hardest.
  • Leakage: The conditions present when you're making a decision exert influence long after the decision has actually been made, contributing to the dissatisfaction (i.e. regret, anticipated regret) and escalation of expectations outlined above.
As I was watching this presentation, I couldn't help but think of various examples in my own life that illustrated some of the issues. There was the cell phone choice which turned out badly, but I also thought about things I had chosen that had come out well. For example, about a year ago, I bought an iPod, and I've been extremely happy with it (even though it's not perfect), despite the fact that there were many options which I considered. Why didn't the process of evaluating all the options evoke a feeling of regret? Because my initial impulse was to purchase the iPod, and I looked at the other options simply out of curiosity. I also had the opportunity to try out some of the players, and that experience helped enormously. And finally, the one feature that gave me pause was video: the Cowon iAudio X5 had video capabilities, and the iPod (when I started looking around) didn't. As it turned out, about a week later the Video iPod was released and made my decision very easy. I got that and haven't looked back since. The funny thing is that since I've gotten that iPod, I haven't used the video feature for anything useful. Not even once.

Another example is my old PC which has recently kicked the bucket. I actually assembled that PC from a bunch of parts, rather than going through a mainstream company like Dell, and the number of components available would probably make the Circuit City stereo example I gave earlier look tiny by comparison. Interestingly, this diversity of choices for PCs is often credited as part of the reason PCs overtook Macs:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

Of course, we know who won this battle. The "Wintel" PC won: The computer that let anyone throw in a new component, new RAM, or a new peripheral when they wanted their computer to do something new. Okay, Mac fans, I know, I know: PCs also "won" unfairly because Bill Gates abused his monopoly with Windows. Fair enough.

But the fact is, as Hill notes, PCs never aimed at being perfect, pristine boxes like Macintoshes. They settled for being "good enough" -- under the assumption that it was up to the users to tweak or adjust the PC if they needed it to do something else.
But as Schwartz would note, the amount of choices in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering. So why was I so happy with my computer? Because I really didn't make many of the decisions - I simply went over to Ars Technica's System Guide and used their recommendations. When it comes time to build my next computer, what do you think I'm going to do? Indeed, Ars is currently compiling recommendations for their October system guide, due out sometime this week. My new computer will most likely be based on their "Hot Rod" box. (Linux presents some interesting issues in this context as well, though I think I'll save that for another post.)

So what are the lessons here? One of the big ones is to separate the analysis from the choice by getting recommendations from someone else (see the Ars Technica example above). In the market for a digital camera? Call a friend (preferably one who is into photography) and ask them what to get. Another thing that strikes me is that just knowing about this can help you overcome it to a degree. Try to keep your expectations in check, and you might open up some room for pleasant surprises (doing this is surprisingly effective with movies). If possible, try using the product first (borrow a friend's, use a rental, etc...). Don't try to maximize the results so much; settle for things that are good enough (this is what Schwartz calls satisficing).
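Satisficing versus maximizing is easy to see as a toy search strategy. A minimal sketch (the quality scores and the "good enough" threshold are made up for illustration):

```python
import random

# Hypothetical quality scores for 1,000 options (phones, cameras, whatever).
options = [random.random() for _ in range(1000)]

def maximize(options):
    """Examine every single option and take the best - exhaustive, and it
    primes you to regret whatever tiny margin you might have missed."""
    return max(options)

def satisfice(options, good_enough=0.9):
    """Take the first option that clears the bar, then stop looking."""
    for score in options:
        if score >= good_enough:
            return score
    return max(options)  # nothing cleared the bar; fall back to the best seen

print(f"maximizer: {maximize(options):.3f}, satisficer: {satisfice(options):.3f}")
```

The maximizer squeezes out a slightly higher score, but only after evaluating all 1,000 options; the satisficer usually stops within the first dozen or so and, per Schwartz, is happier for it.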

Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old. So where is the sweet spot? That's probably a topic for another post, but my initial thoughts are that it would depend heavily on what you're doing and the context in which you're doing it. Also, if you were to take a wider view of things, there's something to be said for maximizing options and then narrowing the field (a la the free market). Still, the concept of choice as a double-edged sword should not be all that surprising... after all, freedom isn't easy. Just ask Spider-Man.
Posted by Mark on October 22, 2006 at 10:56 AM .: Comments (2) | link :.


End of This Day's Posts

Sunday, October 15, 2006

Link Dump
I've been quite busy lately so once again it's time to unleash the chain-smoking monkey research squad and share the results:
  • The Truth About Overselling!: Ever wonder how web hosting companies can offer obscene amounts of storage and bandwidth these days? It turns out that these web hosting companies are offering more than they actually have. Josh Jones of Dreamhost explains why this practice is popular and how they can get away with it (short answer - most people emphatically don't use or need that much bandwidth).
  • Utterly fascinating pseudo-mystery on Metafilter. Someone got curious about a strange flash advertisement, and a whole slew of people started investigating, analyzing the flash file, plotting stuff on a map, etc... Reminded me a little of that whole Publius Enigma thing [via Chizumatic].
  • Weak security in our daily lives: "Right now, I am going to give you a sequence of minimal length that, when you enter it into a car's numeric keypad, is guaranteed to unlock the doors of said car. It is exactly 3129 keypresses long, which should take you around 20 minutes to go through." [via Schneier] That oddly precise length is the signature of a de Bruijn sequence - see the sketch after this list.
  • America's Most Fonted: The 7 Worst Fonts: Fonts aren't usually a topic of discussion here, but I thought it was funny that the Kaedrin logo (see upper left hand side of this page) uses the #7 worst font. But it's only the logo and that's ok... right? RIGHT?
  • Architecture is another topic rarely discussed here, but I thought that the new trend of secret rooms was interesting. [via Kottke]
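About that 3129: car keypads of this type have five buttons, and the factory codes are five presses long, so there are 5^5 = 3125 possible codes. A de Bruijn sequence packs all of them into one overlapping cycle of 3125 presses, and playing the cycle linearly costs four extra presses to wrap around, hence 3129. Here's a sketch using the standard FKM construction (the linked post doesn't show code, so this framing is mine):

```python
def de_bruijn(k, n):
    """Return the de Bruijn sequence B(k, n): a cyclic sequence over k
    symbols containing every length-n string exactly once (FKM algorithm)."""
    a = [0] * k * n
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Five buttons, five-press codes: the cycle covers all 3125 codes, and a
# linear playback needs the first n-1 = 4 symbols repeated at the end.
cycle = de_bruijn(5, 5)
print(len(cycle) + 4)  # 3129 - exactly the number in the quote
```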
That's all for now. Things appear to be slowing down, so that will hopefully mean more time for blogging (i.e. less link dumpy type posts).
Posted by Mark on October 15, 2006 at 11:09 PM .: link :.


End of This Day's Posts

Thursday, September 21, 2006

Gather Intelligence to Be Effective in Interviews, Bounty Hunting
Through following a trail of links long enough that I don't remember where I started, I stumbled upon a post about interviewing. In itself, this is unremarkable. However, at the time, I happened to be watching an episode of Firefly (well, I had it on in the background). Because I am a nerd, I also had the commentary track on, and just as I read about the interviewing anecdote, Joss Whedon (writer/creator of Firefly) began relating something that eerily paralleled the interviewing "secret" in the post referenced above.

The "secret" is to know those who are interviewing you, and tailor your answers to match the type of response the person is looking for. He tells the story of how he interviewed for a principalship at a school in his district, or rather, how a friend helped him prepare:
She drew a rectangle on a piece of paper. “This is the table,” she said. She began to draw small circles around the table — 10 of them. She named each circle. She identified them as the people who would be interviewing me. This was not secret information, this was the panel that every potential principal had to face. The SECRET came next. She pointed to the first circle, “This is John Williams (not his real name). John tends to ask many data related questions. He likes brevity. Keep your answers short to him. Make your point and be quiet.” She pointed to the next circle. “This is Mary Thomas, she’s very child-oriented. She’s very warm and friendly and loves to talk. Answer her questions and orient your answers to how children are affected. Talk a lot with her; elaborate all your points. She’s warm and fuzzy, so use many personal anecdotes.” She continued around the table and when finished, it was like I had the playbook of an opposing football team. I knew the type of questions they would ask. I learned the type of answer each interviewer liked to hear.
This is interesting and, naturally, the advice is not limited to interviewing. (Those that have not seen Firefly but want to might want to bug out here, as spoilers are ahead.) Take Jubal Early. He's a bounty hunter, and he's after one of the people on Serenity. To get to her, he has to make sure the rest of the crew does not get in his way. So before he starts, he listens in on some conversations on the ship, gathering intelligence. As Whedon notes in the commentary:
Early has a very specific way of dealing with every character on the ship. He has listened to their conversation, so he understands he knows enough about them. And he understands that when you're with Mal, you have to take him out instantly because Mal is a physical threat that is very real. And then, you know, he closes up Jayne and Zoe and all the threats ... Kaylee is someone he approaches a different way - through a very horrible form of sexual intimidation. ... Later on we'll see him dealing with Book. And we'll see him dealing with Simon. When he deals with Book, again this guy has to be taken out. which gives us a little insight into Book's character. ... And of course, he deals with Simon with logic, because he realizes that the best way to deal with Simon is to use logic because that's the kind of person he is.
For those who haven't seen the series, some of this might not make sense, but each approach does fit its target. Mal is the captain and he won't stand for an outsider's shenanigans, especially when that outsider threatens the crew. Jayne and Zoe are also physical threats. Kaylee is like a delightful pixie, which makes Early's approach particularly disturbing. Shepherd Book is a priest, though events like the one in this episode indicate that Book has a less than saintly past. Simon is a doctor, and he's very proper, so a logical approach fits him well.

Again, this advice isn't limited to interviewing and bounty hunting. Knowing who you're dealing with is important, and allows you to orient your responses to their expectations. A little while ago, I was promoted to a management position. One of the interesting changes for me is that I'm dealing with a much wider variety of people, and thus I have to modulate my message depending on who I'm talking to. Of course knowing this and doing this are two different things, and I'm certainly no expert when it comes to this stuff. It comes naturally to some people, but not especially to me.

Anyway, not something I expected to write, but the coincidence struck me...
Posted by Mark on September 21, 2006 at 08:55 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, September 17, 2006

Magic Design
A few weeks ago, I wrote about magic and how subconscious problem solving can sometimes seem magical:
When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. ...

Once I noticed this, I began seeing similar patterns throughout my life and even history.
And indeed, Jason Kottke recently posted about how design works, referencing a couple of other designers, including Michael Bierut of Design Observer, who describes his process like this:
When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you’re lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can’t really explain that part; it’s like magic. Sometimes it even happens before you have a chance to tell me that much about your problem!
[emphasis mine] It is like magic, but as Bierut notes, this sort of thing is becoming more important as we move from an industrial economy to an information economy. He references a book about managing artists:
At the outset, the writers acknowledge that the nature of work is changing in the 21st century, characterizing it as "a shift from an industrial economy to an information economy, from physical work to knowledge work." In trying to understand how this new kind of work can be managed, they propose a model based not on industrial production, but on the collaborative arts, specifically theater.

... They are careful to identify the defining characteristics of this kind of work: allowing solutions to emerge in a process of iteration, rather than trying to get everything right the first time; accepting the lack of control in the process, and letting the improvisation engendered by uncertainty help drive the process; and creating a work environment that sets clear enough limits that people can play securely within them.
This is very interesting and dovetails nicely with several topics covered on this blog. Harnessing self-organizing forces to produce emergent results seems to be rising in importance significantly as we proceed towards an information based economy. As noted, collaboration is key. Older business models seem to focus on a more brute force way of solving problems, but as we proceed we need to find better and faster ways to collaborate. The internet, with its hyperlinked structure and massive data stores, has been struggling with a data analysis problem since its inception. Only recently have we really begun to figure out ways to harness the collective intelligence of the internet and its users, and even now, we're only scratching the surface. Collaborative projects like Wikipedia or wisdom-of-crowds aggregators like Digg or Reddit represent an interesting step in the right direction. The challenge here is that we're not facing the problems directly anymore. If you want to create a comprehensive encyclopedia, you can hire a bunch of people to research, write, and edit entries. Wikipedia tried something different. They didn't explicitly create an encyclopedia, they created (or, at least, they deployed) a system that made it easy for a large number of people to collaborate on a large number of topics. The encyclopedia is an emergent result of that collaboration. They sidestepped the problem, and as a result, they have a much larger and more dynamic information resource.

None of those examples are perfect, of course, but the more I think about it, the more I think that their imperfection is what makes them work. As noted above, you're probably much better off releasing a site that is imperfect and iterating, making changes and learning from your mistakes as you go. When dealing with these complex problems, you're not going to design the perfect system all at once. I keep saying we need better information aggregation and analysis tools; we have such tools today, but they leave something to be desired. The point of these systems, though, is that they get better with time. Many older information analysis systems break when you increase the workload quickly - they don't scale well. These newer systems only really work well once they have high participation rates and large amounts of data.

It remains to be seen whether or not these systems can actually handle that much data (and participation), but like I said, they're a good start and they're getting better with time.
Posted by Mark on September 17, 2006 at 08:01 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, September 03, 2006

Does Magic Exist?
I'm back from my trip and it appears that the guest posting has fallen through. So a quick discussion on magic, which was brought up by a friend on a discussion board I frequent. The question: Does magic exist?

I suppose this depends on how you define magic. Arthur C. Clarke once famously said that "Any sufficiently advanced technology is indistinguishable from magic." And that's probably true, right? If some guy can bend spoons with his thoughts, there's probably a rational explanation for it... we just haven't figured it out yet. Does it count as magic if we don't know how he's doing it? What about when we do figure out how he's doing it? What if it really was some sort of empirically observable telekinesis?

After all, magicians have been performing for hundreds of years, relying on sleight of hand and misdirection1 (amongst other tricks of the trade). However, I suspect that's not the type of answer that's being sought.

One thing I think is interesting is the power of thought and how many religious and "magical" traditions were really just ways to harness thought in a productive fashion. For example, crystal balls are often considered to be a magical way to see the future. While not strictly true, it was found that those who look into crystal balls for a long period of time end up entering a sort of trance, similar to hypnosis, and the human mind is able to make certain connections it would not normally make2. Can such a person see the future? I doubt it, but I don't doubt that such people often experience a "revelation" of sorts, even if it is sometimes misguided.

However, you see something similar, though a lot more controlled and a lot less hokey, in a lot of religious traditions. For instance, take Christian Mass and prayer. Mass offers a number of repetitive aspects like singing combined with several chances for reflection and thought. I've always found that going to Mass was very helpful in that it put things in a whole new perspective. Superficial things that worried me suddenly seemed less important and much more approachable. Repetitive rituals (like singing in church) often bring back powerful feelings of the past, etc... further reinforcing the reflection from a different perspective.

Taking it completely out of the spiritual realm, I see very rational people doing the same thing all the time. They just aren't using the same vocabulary. When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. Once you remove the theology from it, prayer is really a similar process.

Once I noticed this, I began seeing similar patterns throughout my life and even history. For example, Archimedes. He was tasked with determining whether a given substance was gold or not (at the time, this was a true challenge). He toiled and slaved at the problem for weeks, pushing all other aspects of his life away. Finally, his wife, sick of her husband's dirty appearance and bad odor, made him take a bath. As he stepped into the tub, he noticed the water rising and had a revelation... this displacement could be used to accurately measure volume, which could then be used to determine density and ultimately whether or not a substance was gold. The moral of the story: Listen to your wife!3

Have I actually answered the question? Well, I may have veered off track a bit, but I find the process of thinking to be interesting and quite mysterious. After all, whatever it is that's going on in our noggins isn't understood very well. It might just be indistinguishable from magic...

1 - Note to self: go see The Illusionist! Also, The Prestige looks darn good. Why does Hollywood always produce these things in pairs? At least it looks like there's good talent involved in each of these productions...

2 - Oddly enough, I discovered this nugget on another trip through the library stacks while I was supposed to be studying in college. Just thought I should call that out in light of recent posting...

3 - Yes, this is an anecdote from the movie Pi.
Posted by Mark on September 03, 2006 at 11:58 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, June 25, 2006

Art for the computer age...
I was originally planning on doing a movie review while our gentle web-master is away, but a topic has come up too many times in the past few weeks for me not to write about it. First it came up in the tag map of Kaedrin, when I noticed that some people were writing pages just to create appealing tag-maps. Then it came up in Illinois and Louisiana, which have passed laws regulating the sale and distribution of "violent games" to minors. This, of course, has led to lawsuits and claims that the laws violate free speech. After that, it was the guys at Penny Arcade. They posted links to We Feel Fine and Listening Post. Those projects search the internet for blogs (maybe this one?) and pull text from them about feelings, and present those feelings to an audience in different ways. Very interesting. Finally, it came up when I opened up the July issue of Game Informer, and read Hideo Kojima's quote:
I believe that games are not art, and will never be art. Let me explain - games will only match their era, meaning what the people of that age want reflects the outcome of the game at that time. So, if you bring a game from 20 years ago out today, no one will say "wow." There will be some essence where it's fun, but there won't be any wows or touching moments. Like a car, for example. If you bring a car from 20 years ago to the modern day, it will be appealing in a classic sense, but how much gasoline it uses, or the lack of air conditioning will simply not be appreciated in that era. So games will always be a kind of mass entertainment form rather than art. Of course, there will be artistic ways of representing games in that era, but it will still be entertainment. However, I believe that games can be a culture that represent their time. If it's a light era, or a dark era, I always try to implement that era in my works. In the end, when we look back on the projects, we can say "Oh, it was that era." So overall, when you look back, it becomes a culture.
Every time I reread that quote, I cringe. Here's a man who is one of the most significant forces in video games today, the creator of Metal Gear, and he's saying "No, they're not art, and never will be." I find his distinction between mass entertainment and art troubling, and his comparison to a car flawed.

It's true that games will always be a reflection of their times- just like anything else is. The limitations of the time and the attitudes of the culture at the time are going to have an effect on everything coming out of that time. A car made in the 60s is going to show the style of the 60s, and is going to have the tech of the 60s. That makes sense. Of course, a painting made in the 1700s is going to show the limits and is going to reflect the feelings of that time, too. The paints, brushes, and canvas used then aren't necessarily going to be the same as the ones used now, especially with the popular use of computers in painting. The fact that something is a reflection of the times isn't going to stop people from appreciating the artistic worth of that thing. The fact that the Egyptians hadn't mastered perspective doesn't stop anyone from wanting to see their statues.

What does that really tell us, though? Nothing. A car from the 80s may not be appreciated as a means of transport as much as a new model, but Kojima seems to be completely forgetting that there are many cars that are appreciated as special. Nobody buys a 60s era muscle car because they think it's a good car for driving around in - they buy it because they think it's special, because some people view older cars as collectable. Some people do see them as more than a mere means of transportation. People are very much "wowed" by old cars. Is there any reason why this can't be true of games?

I am 8 Bit seems to suggest that there are people who are still wowed by those games. Kojima may be partially correct, though. Maybe most of those early games won't hold up in the long run. That shouldn't be a surprise. They're the first generation of games. The 8-Bit era was the beginning of the new wave of games, though. For the first time, creators could start to tell real stories, beyond simple high-score pursuit. Game makers were just getting their wings, and starting to see what games were really capable of. Maybe early games aren't art. Does that mean that games aren't art?

The problem mostly seems to be that we're asking the wrong questions. We shouldn't be asking "are video games art" any more than we'd ask "are movies art." It's a loaded question and you'll never come to any real answer, because the answer is going to depend completely on what movie you're looking at, and who you're asking. The same holds true with games. The question shouldn't be whether all games are art, but whether a particular game has some artistic merit. How we decide what counts as art is constantly up for debate, but there are games that raise such significant moral or philosophical questions, or have such an amazing sense of style, or tell such an amazing story, that it seems hard to argue that they have no artistic merit.

All of this really is leading somewhere. Computers have changed everything. I know that seems obvious, but I think it's taking some people - people like Kojima - a little longer to realize it. Computers have opened up a level of interactivity and access to information that we've never really had before. I can update Kaedrin from Michigan, and can send a message to a friend in Germany, all while buying videos from Japan and playing chess with a man in Alaska (not that I'm actually doing those things... but I could). These changes are going to be reflected in the art our culture produces. There's going to be backlash and criticism, and we're going to find that some people just don't "get it" or don't want to. We've gone through the same thing countless times before. Nobody thought movies would be seen as art when they came on the scene, and they were sure that the talkies wouldn't be. When Andy Warhol arrived, there were plenty of naysayers. Soup cans? As art? Computers have generally been accepted as a tool for making art, but I think we're still seeing the limits pushed. We've barely scratched the surface. The interaction between art, artist, and viewer is blurring, and I, for one, can't wait to see what happens.
Posted by Samael on June 25, 2006 at 01:42 PM .: Comments (4) | link :.


End of This Day's Posts

Sunday, June 18, 2006

Novelty
David Wong's article on the coming video game crash seems to have inspired Steven Den Beste, who agrees with Wong that there will be a gaming crash and also thinks that the same problems affect other forms of entertainment. The crux of the problem appears to be novelty. Part of the problem appears to be evolutionary as well. As humans, we are conditioned for certain things, and it seems that two of our instincts are conflicting.

The first instinct is the human tendency to rely on induction. Correlation does not imply causation, but most of the time, we act like it does. We develop a complex set of heuristics and guidelines that we have extrapolated from past experiences. We do so because circumstances require us to make all sorts of decisions without possessing the knowledge or understanding necessary to provide a correct answer. Induction allows us to operate in situations which we do not understand. Psychologist B. F. Skinner famously explored and exploited this trait in his experiments. Den Beste notes this in his post:
What you do is to reward the animal (usually by giving it a small amount of food) for progressively behaving in ways which is closer to what you want. The reason Skinner studied it was because he (correctly) thought he was empirically studying the way that higher thought in animals worked. Basically, they're wired to believe that "correlation often implies causation". Which is true, by the way. So when an animal does something and gets a reward it likes (e.g. food) it will try it again, and maybe try it a little bit differently just to see if that might increase the chance or quantity of the reward.
So we're hard wired to create these heuristics. This has many implications, from Cargo Cults to Superstition and Security Beliefs.

The second instinct is the human drive to seek novelty, also noted by Den Beste:
The problem is that humans are wired to seek novelty. I think it's a result of our dietary needs. Lions can eat zebra meat exclusively their entire lives without trouble; zebras can eat grass exclusively their entire lives. They don't need novelty, but we do. Primates require a quite varied diet in order to stay healthy, and if we eat the same thing meal after meal we'll get sick. Individuals who became restless and bored with such a diet, and who sought out other things to eat, were more likely to survive. And when you found something new, you were probably deficient in something that it provided nutritionally, so it made sense to like it for a while -- until boredom set in, and you again sought out something new.
The drive for diversity affects more than just our diet. Genetic diversity has been shown to impart broader immunity to disease. Children from diverse parentage tend to develop a blend of each parent's defenses (this has other implications, particularly for the tendency for human beings to work together in groups). The biological benefits of diversity are not limited to humans either. Hybrid strains of many crops have been developed over the years because by selectively mixing the best crops to replant the next year, farmers were promoting the best qualities in the species. The simple act of crossing different strains resulted in higher yields and stronger plants.

The problem here is that evolution has made the biological need for diversity and novelty dependent on our inductive reasoning instincts. As such, what we find is that those we rely upon for new entertainment, like Hollywood or the video game industry, are constantly trying to find a simple formula for a big hit.
It's hard to come up with something completely new. It's scary to even make the attempt. If you get it wrong you can flush amazingly large amounts of money down the drain. It's a long-shot gamble. Every once in a while something new comes along, when someone takes that risk, and the audience gets interested...
Indeed, the majority of big films made today appear to be remakes, sequels or adaptations. One interesting thing I've noticed is that something new and exciting often fails at the box office. Such films usually gain a following on video or television though. Sometimes this is difficult to believe. For instance, The Shawshank Redemption is a very popular film. In fact, it occupies the #2 spot (just behind The Godfather) on IMDB's top rated films. And yet, the film only made $28 million (ranked 52 in 1994) in theaters. To be sure, that's not a trivial chunk of change, but given the universal love for this film, you'd expect that number to be much higher. I think part of the reason this movie failed at the box office was that marketers are just as susceptible to these novelty problems as everyone else. I mean, how do you market a period prison drama that has an awkward title and no big stars? It doesn't sound like a movie that would be popular, even though everyone seems to love it.

Which brings up another point. Not only is it difficult to create novelty, it can also be difficult to find novelty. This is the crux of the problem: we require novelty, but we're programmed to seek out new things via correlation. There is no place to go for perfect recommendations, and novelty for the sake of novelty isn't necessarily enjoyable. I can seek out some bizarre musical style and listen to it, but the simple fact that it is novel does not guarantee that it will be enjoyable. I can't rely upon how a film is marketed because that is often misleading or, at least, not really representative of the movie (or whatever). Once we do find something we like, our instinct is often to exhaust that author or director or artist's catalog. Usually, by the end of that process, the artist's work begins to seem a little stale, for obvious reasons.

Seeking out something that is both novel and enjoyable is more difficult than it sounds. It can even be a little scary. Many times, things we think will be new actually turn out to be retreads. Other times, something may actually be novel, but unenjoyable. This leads to another phenomenon that Den Beste mentions: the "Unwatched pile." Den Beste is talking about Anime, and at this point, he's begun to accumulate a bunch of anime DVDs which he's bought but never watched. I've had similar things happen with books and movies. In fact, I have several books on my shelf, just waiting to be read, but for some of them, I'm not sure I'm willing to put in the time and effort to read them. Why? Because, for whatever reason, I've begun to experience some set of diminishing returns when it comes to certain types of books. These are similar to other books I've read, and thus I probably won't enjoy these as much (even if they are good books).

The problem is that we know something novel is out there, it's just a matter of finding it. At this point, I've gotten sick of most of the mass consumption entertainment, and have moved on to more niche forms of entertainment. This is really a signal-versus-noise problem, a traversal of the long tail - an analysis problem. What's more, with globalization and the internet, the world is getting smaller, and new forms of entertainment keep popping up (for example, here in the US, anime was around 20 years ago, but it was nowhere near as common as it is today). This is essentially a subset of a larger information aggregation and analysis problem that we're facing. We're adrift in a sea of information, and must find better ways to navigate.
Posted by Mark on June 18, 2006 at 03:55 PM .: Comments (6) | link :.


End of This Day's Posts

Sunday, June 11, 2006

Link Dump
Time is short this week, so just a few links I found interesting...
  • Make Me Watch TV: Collaborative torture. This guy lets people choose what he watches on TV. Naturally, voters tend to make him watch the worst of the worst (though it seems that sometimes people are nice and let him watch an episode of Lost or Doctor Who). After each viewing, he blogs about what he's seen. One interesting thing here is that, if you want, you can "sponsor" a time slot: If you pay him $5 (per half hour), he'll let you override the popular vote and force him to watch the program of your choice. Democracy in action.
  • Life After the Video Game Crash: In light of recent bloggery, this article in which David Wong recaps the history of video games (including the beloved Atari 2600) also predicts the coming of another Video Game Crash. Basically, it argues that the next generation gaming consoles offer very little in the way of true innovation and Wong is betting that people will stay away in droves. Regardless of what you may think, it's worth reading because Wong is funny:
    And yet, even with the enormous number of games (Metroid delayed my discovering girls for a good 18 months), the gaming experience itself still couldn't keep our interest for more than a few years. Attention waned again, but this time new, fancier systems arrived just in time, offering a new and novel experience thanks to prettier graphics and character animation. And yet those systems (the Sega Genesis and later the SNES), as great as they were, eventually were retired to closets and attics and the sandy carpets of the Pakistani black market. It was a bitter, dark cloud of Japanese expletives that wafted from the meeting rooms at Nintendo and Sega when they realized their industry effectively lived under a curse.
  • The World's Most Important 6 Second Drum Beat: Nate Harrison's fascinating 2004 video explores the history of the "Amen Break," a six second drum beat from a b-side of a 1969 single that's been used extensively in early hip-hop and sample-based music. From there, it spawned subcultures like drum-and-bass and jungle music. Aside from the strange fact that this is a video (there doesn't appear to actually be a reason for this - most of the video is simply a video of a record playing or a guy sitting in a room, for instance), this is compelling stuff. It covers the history of the break, but also some issues about ownership, copyright, and what constitutes art and creativity...
Apologies for the lameness of this entry. I've been travelling this weekend, and I'm exhausted. I've got several of these weekends coming up, so I'm going to try and set up some guest bloggers to post in my stead. I think the next one will be in two weeks or so. Anyway, I'll try to post again later this week...
Posted by Mark on June 11, 2006 at 09:05 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, April 16, 2006

Wikipedia Meme
Shamus stumbled upon an interesting meme (at Tim Worstall's blog) relying upon Wikipedia's ridiculously comprehensive date pages:
Go to Wikipedia and look up your birth day (excluding the year). List three neat facts, two births and one death in your blog, including the year.
Like Shamus, I won't limit myself to the numbers above and will instead just list some things I think are interesting about September 13...

[Facts | Births | Deaths: the September 13 lists followed here.]
Posted by Mark on April 16, 2006 at 05:54 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, April 09, 2006

Philadelphia Film Festival: Adult Swim 4 Your Lives
Well. That was interesting. Hosted by Dana Snyder (voice of Master Shake from Aqua Teen Hunger Force) and featuring a veritable plethora of other Adult Swim creators, Adult Swim 4 Your Lives was a show that defies any legitimate explanation. As such, I will simply list out some highlights, as well as some words that I would use to describe the night:
  • The Paul Green School of Rock kicked things off. Yes, Paul Green was the inspiration for Jack Black's character in the film The School of Rock.
  • Skeletor singing show tunes (notably the song Tomorrow from Annie)
  • In fact, lots of singing was happening tonight.
  • Burlesque.
  • Beethoven vs. Bach (featuring Camel Toe)
  • Evil Monkey Boy (and hula hoops).
  • Suggestive dancing.
  • Twirling tassels.
  • Preview of second season of Tom Goes to the Mayor and a new series, Minoriteam. I got a t-shirt!
  • Aqua Teen Hunger Force Feature Film (!?) preview.
  • Did I mention Burlesque?
  • Dana Snyder was either putting on his Master Shake voice all night, or that's really the way his voice sounds. Also, that man is crazy.
Basically the night was filled with Dana Snyder saying (usually singing) wacky stuff, followed by some sort of weird performance (usually featuring elements of the burlesque). It was quite a night, though from what I understand, last year's event went on much longer and was even crazier. Nevertheless, if you're a fan of Adult Swim and if such an event is ever going on near you, I'd recommend it. Unless the thought of watching Skeletor belt out a few show tunes turns your stomach. Then I'd suggest avoiding it.

Update 4.15.06: I've created a category for all posts from the Philadelphia Film Festival.
Posted by Mark on April 09, 2006 at 03:41 AM .: Comments (0) | link :.


End of This Day's Posts

Sunday, March 26, 2006

Introverts and a Curious Guy
Time is short this week, so here's a few interesting links:
  • Introverts of the World, Unite!: An interview with Jonathan Rauch, the author of an article in the Atlantic called Caring for Your Introvert, in which he perfectly characterized what it means to be an introvert. The reaction was overwhelming, and the article has drawn more traffic than any other piece on the Atlantic website. From personal experience, I can say that it struck a nerve not only with me, but with several friends (including several Kaedrin readers). Some good stuff in the interview:
    The Internet is the perfect medium for introverts. You could almost call it the Intronet. You know the old New Yorker cartoon with a dog sitting at a computer saying to another dog, "On the Internet, no one knows you're a dog." Well, on the Internet, no one knows you're an introvert. So it's kind of a natural that when The Atlantic put this piece online, introverts beat a path to it; it's the ideal distribution mechanism by which introverts can reach other introverts and spread the word.
    [emphasis mine] It is very true that the internet is great for introverts and I'd wager that a lot of bloggers and discussion board frequenters are more introverted than not.
  • Curious Guy: Malcolm Gladwell: Bill Simmons writes an awesome sports column for ESPN (it can be entertaining even for people who aren't big sports fans like myself), and every so often he e-mails questions "to somebody successful -- whether it's a baseball pitcher, an author, a creator of a TV show, another writer or whomever" and then he posts the results. A few weeks ago, he went back and forth with Malcolm Gladwell, leading to several interesting anecdotes, including this one which I found fascinating:
    There's a famous experiment done by a wonderful psychologist at Columbia University named Dan Goldstein. He goes to a class of American college students and asks them which city they think is bigger -- San Antonio or San Diego. The students are divided. Then he goes to an equivalent class of German college students and asks the same question. This time the class votes overwhelmingly for San Diego. The right answer? San Diego. So the Germans are smarter, at least on this question, than the American kids. But that's not because they know more about American geography. It's because they know less. They've never heard of San Antonio. But they've heard of San Diego and using only that rule of thumb, they figure San Diego must be bigger. The American students know way more. They know all about San Antonio. They know it's in Texas and that Texas is booming. They know it has a pro basketball team, so it must be a pretty big market. Some of them may have been in San Antonio and taken forever to drive from one side of town to another -- and that, and a thousand other stray facts about Texas and San Antonio, have the effect of muddling their judgment and preventing them from getting the right answer.
    Gladwell's got a new blog as well, and he posted a pointer to the Dan Goldstein research paper (pdf) as well as Goldstein's blog, where he comments on Gladwell's reference...
That's all for now...
Posted by Mark on March 26, 2006 at 07:39 PM .: Comments (2) | link :.


End of This Day's Posts

Wednesday, January 18, 2006

Neutral Emergence
On Sunday, I wrote about cheating in probabilistic systems, but one thing I left out was that these systems are actually neutral systems. A while ago, John Robb (quoting the Nicholas Carr post I referenced) put it well:
To people, "optimization" is a neutral term. The optimization of a complex mathematical, or economic, system may make things better for us, or it may make things worse. It may improve society, or degrade it. We may not be able to apprehend the ends, but that doesn't mean the ends are going to be good.
He's exactly right. Evolution and emergent intelligence don't naturally flow towards some eschatological goodness. They move forward under their own logic. They often solve problems we don't want solved. For example, in global guerrilla open source warfare, this emergent community intelligence is slowly developing forms of attack (such as systems disruption) that make it an extremely effective foe for nation-states.
Like all advances in technology, the progress of self-organizing systems and emergent results can be used for good or for ill. In the infamous words of Buckethead:
Like the atom, the flyswatter can be a force for great good or great evil.
Indeed.
Posted by Mark on January 18, 2006 at 10:24 PM .: Comments (0) | link :.


End of This Day's Posts

Tuesday, January 17, 2006

Happy Birthday, Ben
Today is Ben Franklin's 300th birthday. In keeping with the theme of tradeoffs and compromise that often adorns this blog, and since Franklin himself has also been a common subject, here is a quote from Franklin's closing address to the Constitutional Convention in Philadelphia:
I confess that I do not entirely approve this Constitution at present; but sir, I am not sure I shall ever approve it: For, having lived long, I have experienced many instances of being obliged, by better information or fuller consideration, to change opinions even on important subjects, which I once thought right, but found to be otherwise. It is therefore that, the older I grow, the more apt I am to doubt my own judgment, and to pay more respect to the judgment of others.

Most men, indeed, as well as most sects in religion, think themselves in possession of all truth, and that wherever others differ from them, it is so far error. ... But, though many private persons think almost as highly of their own infallibility as of that of their sect, few express it so naturally as a certain French lady, who, in a little dispute with her sister, said: "But I meet with nobody but myself that is always in the right."

In these sentiments, sir, I agree to this Constitution, with all its faults, - if they are such, - because I think a general government necessary for us... I doubt, too, whether any other convention we can obtain may be able to make a better Constitution; for, when you assemble a number of men, to have the advantage of their joint wisdom, you inevitably assemble with those men all their prejudices, their passions, their errors of opinion, their local interests, and their selfish views. From such an assembly can a perfect production be expected?

It therefore astonishes me, sir, to find this system approaching so near to perfection as it does; and I think it will astonish our enemies, who are waiting with confidence to hear that our counsels are confounded like those of the builders of Babel, and that our States are on the point of separation, only to meet hereafter for the purpose of cutting one another's throats. Thus I consent, sir, to this Constitution, because I expect no better, and because I am not sure that it is not the best.
There are some people today (and even in Franklin's time) who seem to think of compromise as some sort of fundamental evil, but it appears to me to be an essential part of democracy.

Update 1.18.06: Mister Snitch points to The Benjamin Franklin Tercentenary, an excellently designed site dedicated to Franklin's 300th birthday...
Posted by Mark on January 17, 2006 at 10:48 PM .: Comments (0) | link :.


End of This Day's Posts

Thursday, January 05, 2006

On the lighter side
You may be familiar with my long-winded, more serious style, but I thought this blond joke would be a welcome change of pace. Best. Joke. Evar. [via Chizumatic, whose lack of permalinks adds extra irony]
Posted by Mark on January 05, 2006 at 12:53 AM .: Comments (1) | link :.


End of This Day's Posts

Saturday, December 24, 2005

Merry Christmas
Fry: "There's supposed to be some kind of, you know, pine tree."
Professor: "Pine trees have been extinct for eight hundred years, Fry. Gone the way of the poodle and your primitive notions of modesty." - (Listen to MP3)
In anticipation of the eventual extinction of Pine Trees, here's the traditional Kaedrin Christmas Cactus:

The Kaedrin Christmas Cactus

"Happy Christmas to all, and to all a good-night." (sound clip via Can't Get Enough Futurama)

Also regarding Christmas Trees, check out a post from a few years ago: Is the Christmas Tree Christian?
Posted by Mark on December 24, 2005 at 10:27 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, November 27, 2005

Hurricane Names, Restaurant Critics, and more...
Time is short this week, so here's a few links:
  • Hurricane Keyser Soze: What's in a name? Absolutely brilliant commentary on how the National Weather Service names their hurricanes.
    When Hurricane Isabel came ashore here a few years ago, I openly mocked it, and Isabel dropped a 250-year-old tree on my car. Now, was I aware on some level that the hurricane could do this? Sure I was. But I mocked anyway, and who could blame me? The only Isabel I ever knew was the moody, vaguely goth younger sister of a high school friend. Could she occasionally annoy? Sure. Did she prompt the odd argument? No doubt. Were there times that Isabel was irrational? Of course. But drop a tree on my car? Sorry, no sir.

    We want to fear these storms. We really do. But I'll be damned if I run from Hurricane Florence. I already have had the experience of being in a mandatory evacuation over a Hurricane named Bob. I didn't want to evacuate. I felt like a grade-A pussy running from someone named Bob. I still feel that way.

    So, is it any wonder that thousands of people stayed in harm's way, determined to ride out Katrina? Of course it isn't. ... What we need is a hurricane named, let's say, The Penetrator. You tell me that The Penetrator is coming ashore in 24 hours and I am gone like Keyser Soze. Use the names of famous human predators, like Adolph or Idi Amin or Attilla or Affleck, and people will break out in a mad dash for higher ground.
    Brilliant. [via Ministry of Minor Perfidy]
  • The Secret Life of a Restaurant Critic: The Restaurant Critic for the Boston Globe explains her job in surprisingly interesting detail.
  • Speed Demos Archive: This is why I love the internet. It's just full of people like this who have way too much time on their hands. These guys have compiled a list of their speed runs - attempts to win a game in as short a time as possible. They've got videos of each one. Just in case you wanted to watch someone defeat Metroid in 18 minutes.
That's all for now...
Posted by Mark on November 27, 2005 at 10:27 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, October 09, 2005

Link Dump
Not much time this week, so here are some interesting links:
  • A little while ago, I wrote about two software projects, one successful, one not. David Foster, who initially pointed me towards this story, has a follow up on one of the projects. The FBI's failure to develop a "Virtual Case File" system was bad enough, and now they're denying Freedom Of Information Act requests made by folks who are trying to figure out just what went wrong with the project. My initial reaction was that the project failed due to a lack of discipline. And now the failure deepens as the FBI seeks to avoid accountability for the project.
  • The Physics of ET Civilizations by Michio Kaku: An interesting take on what constitutes a truly advanced civilization. He claims that we should rank civilizations by their energy consumption:
    In a seminal paper published in 1964 in the Journal of Soviet Astronomy, Russian astrophysicist Nicolai Kardashev theorized that advanced civilizations must therefore be grouped according to three types: Type I, II, and III, which have mastered planetary, stellar and galactic forms of energy, respectively. He calculated that the energy consumption of these three types of civilization would be separated by a factor of many billions.
    We're currently living in a Type 0 civilization, but we're moving quickly towards a Type 1 civilization. How long after that could we reach Type 2? (For a rough sense of the scale involved, see the back-of-the-envelope sketch after this list.)
  • 10 Ways To Create Content For Your Weblog: Ostensibly written to help us overcome bloggers block, but with such earth shattering advice as "Read, Listen To, or Watch the News" and "read your favorite blogs with the purpose of finding ideas to write about," I can't say as though it's that big of a help. I'll admit I've been in a bit of a funk lately, but this has more to do with a lack of time and energy than ideas. Until I get the time and motivation to write more, lists of links like this seem to be the order of the day.
  • Caring for Your Introvert by Jonathan Rauch: This is old, but it's a great article explaining a much misunderstood group: introverts:
    What is introversion? ... Introverts are not necessarily shy. Shy people are anxious or frightened or self-excoriating in social settings; introverts generally are not. Introverts are also not misanthropic, though some of us do go along with Sartre as far as to say "Hell is other people at breakfast." Rather, introverts are people who find other people tiring.

    Extroverts are energized by people, and wilt or fade when alone. They often seem bored by themselves, in both senses of the expression. Leave an extrovert alone for two minutes and he will reach for his cell phone. In contrast, after an hour or two of being socially "on," we introverts need to turn off and recharge. My own formula is roughly two hours alone for every hour of socializing. This isn't antisocial. It isn't a sign of depression. It does not call for medication. For introverts, to be alone with our thoughts is as restorative as sleeping, as nourishing as eating.
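One more note on the Kaku/Kardashev item above, as promised: Carl Sagan later proposed a continuous interpolation of Kardashev's scale, which gives a rough sense of where "Type 0" puts us. A back-of-the-envelope sketch (the wattage figures are my own rough assumptions, not from Kaku's article):

    import math

    def kardashev_type(watts):
        # Sagan's interpolation of the Kardashev scale:
        # Type = (log10(P) - 6) / 10, with P in watts.
        # Type I ~ 10^16 W (planetary), Type II ~ 10^26 W (stellar),
        # Type III ~ 10^36 W (galactic).
        return (math.log10(watts) - 6) / 10

    print(kardashev_type(2e13))  # humanity today, very roughly: ~0.7
    print(kardashev_type(1e16))  # exactly Type 1
    print(kardashev_type(4e26))  # roughly the Sun's total output: ~2.1

By this yardstick, getting from here to Type 2 means multiplying our energy budget by a factor of trillions, which puts Kaku's question in perspective.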
That's all for now.
Posted by Mark on October 09, 2005 at 08:50 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, October 02, 2005

Interviewing
Recent events have placed me in a position where I will be interviewing people for open positions on my team. Not having experience with such a thing, my first reaction was to set the monkey research squad loose on the subject. As usual, they didn't disappoint.
  • The Guerrilla Guide to Interviewing by Joel Spolsky: I think this is the best article I found. All of Spolsky's stuff is great, and his interviewing style is pretty straightforward: he looks for people who are "Smart, and Gets Things Done". One thing to note about a lot of the interviewing advice on the internet is that it's almost all focused on hiring programmers. In my case, though, I'm not looking for someone that technical (though there is some coding involved). I'm looking for someone with usability experience (usability folks generally aren't programmers) and someone who can work with the business group to write a good requirements document for the programmers. However, a lot of interviewing isn't about the technical details of coding, and Spolsky has a few gems in his article. Here's one I found useful:
    Part 6: the design question. Ask the candidate to design something. Jabe Blumenthal, the original designer of Excel, liked to ask candidates to design a house. According to Jabe, he's had candidates who would go up to the whiteboard and immediately draw a square. A square! These were immediate No Hires. In design questions, what are you looking for?

    Good candidates will try to get more information out of you about the problem. Who is the house for? As a policy, I will not hire someone who leaps into the design without asking more about who it's for. Often I am so annoyed that I will give them a hard time by interrupting them in the middle and saying, "actually, you forgot to ask this, but this is a house for a family of 48-foot tall blind giraffes."
    There's a lot more to it than just that, but as I'm looking for someone to work with the business group to write a requirements document, it's pretty important that they ask questions and try to get more details. Lots of people like to ask for specific technologies, etc..., even though such specifics might not be what they really need. The important thing is to find out what they really want to do, then figure out how to best achieve that goal. I don't know if I'd be as picky about this sort of question as Joel, though. I do ask a design question in the interview, but I've only done one interview so far, and the guy didn't get it. I'll be interested to see whether this sort of design question actually does become a good indicator.
  • How to Hire Like a Start-Up by Rob Walling: Not quite as good or thorough as Spolsky's article, but still filled with solid insight on the hiring process from a slightly different perspective. His article focuses on hiring fast. In Joel's article, there is only Hire and No Hire. Rob has an extra category: Maybes:
    ...the Rule of Thirds: on a 10-point scale you make money with your 7s, 8s, and 9s, break even with your 4s, 5s, and 6s, and lose money with your 1s, 2s, and 3s. There are no 10s in that list since no one is perfect; the highest possible rating is a 9+.

    In every job search there are hires, maybes, and no-hires. Using the Rule of Thirds, 7-9 is a hire, 4-6 is a maybe, and 1-3 is a no-hire.

    The only difference between hiring slow and hiring fast is what you do with the maybes; when hiring slow the maybes become nos, when hiring fast you let the maybes proceed to the next round of evaluation.
  • Microsoft Interview Questions: A blog that started as a collection of interview questions asked by Microsoft, but that has lots of general interviewing stuff as well.
  • The New-Boy Network by Malcolm Gladwell: Now that we've got a good handle on how to interview, Gladwell comes along and pulls the rug out from underneath us. Just how valuable are interviews anyway? Gladwell looks at the situation in his usual thorough manner, and claims that interviewing is a lot more difficult than it seems. Most judgments appear to be based on first impressions and the assumption that people's reactions in one context (the interview) are the same as in another (working). However, once he establishes that premise, he goes on to talk about "structured interviewing" with an HR expert:
    Menkes moved on to another area--handling stress. A typical question in this area is something like "Tell me about a time when you had to do several things at once. How did you handle the situation? How did you decide what to do first?" Menkes says this is also too easy. "I just had to be very organized," he began again in his mock-sincere singsong. "I had to multitask. I had to prioritize and delegate appropriately. I checked in frequently with my boss." Here's how Menkes rephrased it: "You're in a situation where you have two very important responsibilities that both have a deadline that is impossible to meet. You cannot accomplish both. How do you handle that situation?"

    "Well," I said, "I would look at the two and decide what I was best at, and then go to my boss and say, 'It's better that I do one well than both poorly,' and we'd figure out who else could do the other task."

    Menkes immediately seized on a telling detail in my answer. I was interested in what job I would do best. But isn't the key issue what job the company most needed to have done? With that comment, I had revealed something valuable: that in a time of work-related crisis I start from a self-centered consideration.
    Most of the time, we want to believe that we can derive broad trends of behavior from the interview. The structured interviewing process is very narrowly focused:
    What is interesting about the structured interview is how narrow its objectives are. When I interviewed Nolan Myers I was groping for some kind of global sense of who he was; Menkes seemed entirely uninterested in arriving at that same general sense of me--he seemed to realize how foolish that expectation was for an hour-long interview. The structured interview works precisely because it isn't really an interview; it isn't about getting to know someone, in a traditional sense. It's as much concerned with rejecting information as it is with collecting it.
Interesting stuff. As I mentioned, I've not progressed much in the process just yet, but I'll be interested to see how this information plays out.
Posted by Mark on October 02, 2005 at 04:29 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, September 04, 2005

The Pendulum Swings
I've often commented that human beings don't so much solve problems as trade one set of problems for another (in the hope that the new set of problems is more favorable than the old). Yet that process doesn't always follow a linear trajectory. Initial reactions to a problem often cause problems of their own. Reactions to those problems often take the form of an over-correction. And so it continues, like the swinging of a pendulum, back and forth, until it reaches its final equilibrium.

This is, of course, nothing new. Hegel's philosophy of argument works in exactly that way. You start with a thesis, some sort of claim that becomes generally accepted. Then comes the antithesis, as people begin to find holes in the original thesis and develop an alternative. For a time, the thesis and antithesis vie to establish dominance, but neither really wins. In the end, a synthesis comprised of the best characteristics of the thesis and antithesis emerges.

Naturally, it's rarely so cut and dried, and the process continues as the synthesis eventually takes on the role of the thesis, with new antitheses arising to challenge it. It works like a pendulum, oscillating back and forth until it reaches a stable position (a new synthesis). There are some interesting characteristics of pendulums that are also worth noting in this context. Steven Den Beste once described the two stable states of the pendulum: one in which the weight hangs directly below the hinge, and one in which the weight is balanced directly above the hinge.
On the left, the weight hangs directly below the hinge. On the right, it's balanced directly above it. Both states are stable. But if you slightly perturb the weight, they don't react the same way. When the left weight is moved off to the side, the force of gravity tries to center it again. In practice, if the hinge has a good bearing, the system then will oscillate around the base state and eventually stop back where it started. But if the right weight is perturbed, then gravity pulls the weight away and the right system will fail and convert to the left one.

The left state is robust. The right state is fragile. The left state responds to challenges by trying to maintain itself; the right state responds to challenges by shattering.
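You can verify Den Beste's description in a few lines of simulation. This is a toy sketch of my own (not his): a damped pendulum with the physical constants folded into the time units, perturbing each state slightly:

    import math

    def simulate(theta0, steps=20000, dt=0.001, damping=0.3):
        # Damped pendulum: theta'' = -sin(theta) - damping * theta',
        # where theta is measured from the straight-down position.
        theta, omega = theta0, 0.0
        for _ in range(steps):
            omega += (-math.sin(theta) - damping * omega) * dt
            theta += omega * dt
        return theta

    # Perturb the hanging state (theta = 0) and the inverted state (theta = pi).
    print(simulate(0.1))            # settles back near 0: the robust state recovers
    print(simulate(math.pi - 0.1))  # falls and settles near 0: the fragile state shatters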
Not all systems are robust, but it's worth noting that even robust systems are not immune to perturbation. The point isn't that they can't fail, it's that when they do fail, they fail gracefully. Den Beste applies the concept to all sorts of things, including governments and economic systems, and I think the analogy is apt. In the coming months and years, we're going to see a lot of responses to the tragedy of Hurricane Katrina. Katrina represents a massive perturbation; it's set the pendulum swinging, and it'll be a while before it reaches its resting place. Many new policies will result. Some of them will be good, some will be bad, and some will set new cycles into action. Disaster preparedness will become more prevalent as time goes on, and the plans will get better too. But not all at once, because we don't so much solve problems as trade one set of disadvantages for another, in the hopes that we can get that pendulum to rest in its stable state.

Glenn Reynolds has collected a ton of worthy places to donate for hurricane relief here. It's also worth noting that many employers are matching donations to the Red Cross (mine is), so you might want to go that route if it's available...
Posted by Mark on September 04, 2005 at 11:02 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, August 21, 2005

Mastery II
I'm currently reading Vernor Vinge's A Deepness in the Sky. It's an interesting novel, and there are elements of the story that resemble Vinge's singularity. (Potential spoilers ahead) The story concerns two competing civilizations that travel to an alien planet. Naturally, there are confrontations and betrayals, and we learn that one of the civilizations utilizes a process to "Focus" an individual on a single area of study, essentially turning them into a brilliant machine. Within the story, there is a lot of debate about the Focused, and in one such exchange, a character describes it like this:
... you know about really creative people, the artists who end up in your history books? As often as not, they're some poor dweeb who doesn't have a life. He or she is just totally fixated on learning everything about some single topic. A sane person couldn't justify losing friends and family to concentrate so hard. Of course, the payoff is that the dweeb may find things or make things that are totally unexpected. See, in that way, a little of Focus has always been part of the human race. We Emergents have simply institutionalized this sacrifice so the whole community can benefit in a concentrated, organized way.
Debate revolves around this concept because people living in this Focused state could essentially be seen as slaves. However, the quote above reminded me of a post I wrote a while ago called Mastery:
There is an old saying "Jack of all trades, Master of none." This is indeed true, though with the demands of modern life, we are all expected to live in a constant state of partial attention and must resort to drastic measures like Self-Censorship or information filtering to deal with it all. This leads to an interesting corollary for the Master of a trade: They don't know how to do anything else!
In that post, I quoted Isaac Asimov, who laments that he's clueless when it comes to cars, and relates a funny story about what happened when he once got a flat tire. I wondered if that sort of mastery was really a worthwhile goal, but the artificially induced Focus in Vinge's novel opens the floor up to several questions. Would you volunteer to be Focused in a specific area of study, knowing that you would basically do that and only that? No family, no friends, but only because you are so focused on your studies (as portrayed in the novel, doing work in your field is what makes you happy). What if you could opt to be Focused for a limited period of time?

There are a ton of moral and ethical questions about the practice, and as portrayed in the book, it's not a perfect process and may not be reversible (at least, not without damage). The rewards would be great - Focusing sounds like a truly astounding feat. But would it really be worth it? As portrayed in the book, it definitely would not, as those wielding the power aren't very pleasant. Because the Focused are so busy concentrating on their area of study, they become completely dependent on the non-Focused to guide them (it's possible for a Focused person to become too obsessed with a problem, to the point where physical harm or even death can occur) and to do everything else for them (i.e. feed them, clean them, etc...). Again, in the book, those who are guiding the Focused are ruthless exploiters. However, if you had a non-Focused guide who you trusted, would you consider it?

I still don't know that I would. While the results would surely be high quality, the potential for abuse is astounding, even when it's someone you trust that is pulling the strings. Nothing says they'll stay trustworthy, and it's quite possible that they could be replaced in some way by someone less trustworthy. If the process was softened to the point where the Focused retains at least some control over their focus (including the ability to go in and out), then this would probably be a more viable option. Fortunately, I don't see this sort of thing happening in the way proposed by the book, but other scenarios present interesting dilemmas as well...
Posted by Mark on August 21, 2005 at 09:25 PM .: Comments (0) | link :.


End of This Day's Posts

Sunday, June 19, 2005

Veg Out
Neal Stephenson's take on Star Wars: Episode III - Revenge of the Sith in the New York Times is interesting on a few levels. He makes some common observations, such as the prevalence of geeky details in supplementary material of the Star Wars universe (such as the Clone Wars cartoons or books), but the real gem is his explanation for why the geeky stuff is mostly absent from the film:
Modern English has given us two terms we need to explain this phenomenon: "geeking out" and "vegging out." To geek out on something means to immerse yourself in its details to an extent that is distinctly abnormal - and to have a good time doing it. To veg out, by contrast, means to enter a passive state and allow sounds and images to wash over you without troubling yourself too much about what it all means.
Stephenson says the original Star Wars is a mixture of veg and geek scenes, while the new movies are almost all veg out material. The passive vegging out he describes is exactly how I think of the prequels (except that Episode III seems to have a couple of non-veg out scenes, which is one of the reasons I think it fares better than the other prequels). He also makes a nice comparison to the business world, but then takes a sudden sort of indirect dive towards outsourcing and pessimism at the end of the article, making a vague reference to going "the way of the old Republic."

I'm not sure I agree with those last few paragraphs. I see the point, but it's presented as a given. Many have noted Stephenson could use a good editor for his recent novels, and it looks to me like Stephenson was either intentionally trying to keep it short (it's only two pages - not what you'd expect from someone who routinely writes 900-page books, including three that are essentially a single 2,700-page novel) or his article was edited down to fit somewhere. In either case, I'm sure he could have expounded upon those last paragraphs to the tune of a few thousand words, but that's what I like about the guy. Not that the article is bad, but I prefer Stephenson's long-winded style. Ironically, Stephenson has left the details out of his article; it reads more like a PowerPoint presentation that summarizes the bullet points of his argument than the sort of in-depth analysis I'm used to from Stephenson. As such, I'm sure there are a lot of people who would take issue with some of his premises. Perhaps it's an intentional irony, or (more likely) I'm reading too much into it.
Posted by Mark on June 19, 2005 at 10:19 AM .: link :.


End of This Day's Posts

Sunday, June 05, 2005

Link Dump
Time is short this week, so I'll just have to rely on my army of chain smoking monkey researchers for a few links:
  • The Singularity: Vernor Vinge's take on the Singularity. He predicts that we'll have the technology to create a super-human intelligence within 30 years, and that once the transition is made, "the human era will be ended." The concept has been around a while (and Vinge has written a pair of novels focusing on those ideas, amongst other things) and this 1993 essay is a good introduction.
  • A Gamer's Manifesto: David Wong and Haimoimoi deliver a heartbreaking list of things gamers really want out of their games. Astute readers may remember Wong's brilliant Ultimate War Sim. The manifesto isn't quite as funny, but it does nail the frustrating things about gaming right on the head. And, of course, with all the calls for better AI, they're just begging for the Singularity...
  • Interview with Umberto Eco in the Telegraph: The topics include a comparison of Foucault's Pendulum and The Da Vinci Code (big difference) and whether Eco is the Italian Salman Rushdie (no). Interestingly enough, before I knew what the Singularity was, I had thought that "Abufalafia" (the allegedly "incredible" computer from Eco's Foucault's Pendulum) would comprise just such an intelligence. Alas, once I read the book, I realized it was not to be... [via Johno at The Ministry of Minor Perfidy]
  • Zak Smith's Illustrations For Each Page of Gravity's Rainbow: "The Modern Word hosts a staggering 755 illustrations by New York based artist, Zak Smith, depicting the events and imagery of Pynchon's magnum opus." Some interesting stuff there... I'll have to add a link to my review. [via William Pittsburg, who apparently wrote the introduction and coded the pages]
That's all for now, perhaps more early in the week...

Update: Added another link and some text...
Posted by Mark on June 05, 2005 at 09:57 PM .: link :.


End of This Day's Posts

Sunday, May 29, 2005

Sharks, Deer, and Risk
Here's a question: Which animal poses the greater risk to the average person, a deer or a shark?

Most people's initial reaction (mine included) to that question is to answer that the shark is the more dangerous animal. Statistically speaking, though, the average American is much more likely to be killed by a deer (due to collisions with vehicles) than by a shark attack. Truly accurate statistics for deer collisions don't exist, but estimates place the number of accidents in the hundreds of thousands. Deer accidents cause millions of dollars worth of damage, thousands of injuries, and hundreds of deaths every year.

Shark attacks, on the other hand, are much less frequent. Each year, approximately 50 to 100 shark attacks are reported. "World-wide, over the past decade, there have been an average of 8 shark attack fatalities per year."

It seems clear that deer actually pose a greater risk to the average person than sharks. So why do people think the reverse is true? There are a number of reasons, among them the fact that deer don't intentionally cause death and destruction (not that we know of anyway) and they are also usually harmed or killed in the process, while sharks directly attack their victims in a seemingly malicious manner (though I don't believe sharks to be malicious either).

I've been reading Bruce Schneier's book, Beyond Fear, recently. It's excellent, and at one point he draws a distinction between what security professionals refer to as "threats" and "risks."
A threat is a potential way an attacker can attack a system. Car burglary, car theft, and carjacking are all threats ... When security professionals talk about risk, they take into consideration both the likelihood of the threat and the seriousness of a successful attack. In the U.S., car theft is a more serious risk than carjacking because it is much more likely to occur.
Everyone makes risk assessments every day, but most everyone also has different tolerances for risk. It's essentially a subjective decision, and it turns out that most of us rely on imperfect heuristics and inductive reasoning when it comes to these sorts of decisions (because it's not like we have the statistics handy). Most of the time, these heuristics serve us well (and it's a good thing too), but what this really ends up meaning is that when people make a risk assessment, they're basing their decision on a perceived risk, not the actual risk.
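To put Schneier's likelihood-times-seriousness framing in concrete terms, here's the deer/shark comparison as trivial arithmetic (the figures are the rough estimates quoted above, not precise statistics):

    # Rough annual fatality figures from the estimates above: hundreds of
    # deaths from deer collisions in the U.S. vs. ~8 shark fatalities worldwide.
    deer_deaths_per_year = 150   # a conservative reading of "hundreds"
    shark_deaths_per_year = 8

    # Risk weighs likelihood against seriousness; with death as the outcome
    # in both cases, the likelihood ratio tells the whole story.
    print(deer_deaths_per_year / shark_deaths_per_year)  # deer: ~19x the risk

Nobody runs this calculation in their head, of course, which is exactly why we fall back on heuristics and perceived risk.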

Schneier includes a few interesting theories about why people's perceptions get skewed, including this:
Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in -- only a very few small and special parts of it.

Slices of life with immediate visual impact get magnified; those with no visual component, or that can’t be immediately and viscerally comprehended, get downplayed. Rarities and anomalies, like terrorism, are endlessly discussed and debated, while common risks like heart disease, lung cancer, diabetes, and suicide are minimized.
When I first considered the Deer/Shark dilemma, my immediate thoughts turned to film. This may be a reflection on how much movies play a part in my life, but I suspect some others would also immediately think of Bambi, with its cuddly, cute, and innocent deer, and Jaws, with its maniacal great white shark. Indeed, Fritz Schranck once wrote about these "rats with antlers" (as some folks refer to deer) and how "Disney's ability to make certain animals look just too cute to kill" has deterred many people from hunting and eating deer. When you look at the deer collision statistics, what you see is that what Disney has really done is to endanger us all!

Given the above, one might be tempted to pursue some form of censorship to keep the media from degrading our ability to determine risk. However, I would argue that this is wrong. Freedom of speech is ultimately a security measure, and if we're to consider abridging that freedom, we must also seriously consider the risks of that action. We might be able to slightly improve our risk decisionmaking with censorship, but at what cost?

Schneier himself recently wrote about this subject on his blog, in response to an article which argues that suicide bombings in Iraq shouldn't be reported (because it scares people and serves the terrorists' ends). It turns out there are a lot of reasons why the media's focus on horrific events in Iraq causes problems, but almost any way you slice it, it's still wrong to censor the news:
It's wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public -- either through legal censorship or self-imposed "restraint" -- we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.
Like all of security, this comes down to a basic tradeoff. As I'm fond of saying, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Risk can be difficult to determine, and the media's sensationalism doesn't help, but censorship isn't a realistic solution to that problem because it introduces problems of its own (and those new problems are worse than the one we're trying to solve in the first place). Plus, both Jaws and Bambi really are great movies!
Posted by Mark on May 29, 2005 at 08:50 PM .: link :.


End of This Day's Posts

Sunday, May 22, 2005

Voters and Lurkers
Debating online, whether it be through message boards or blogs or any other method, can be rewarding, but it can also be quite frustrating. When most people think of a debate, they think of two sides arguing against each other, with one of the two factions "winning" the argument. It's a process of expression in which different people with different points of view express their opinions and are criticized by one another.

I've often found that specific threads tend to boil down to a point where the argument is going back and forth between two sole debaters (with very few interruptions from others). Inevitably, the debate gets to the point where both sides' assumptions (or axioms) have been exposed, and neither side is willing to agree with the other. To the debaters, this can be intensely frustrating. As such, anyone who has spent a significant amount of time debating others online can usually see that they're probably never going to convince their opponents. So who wins the argument?

The debaters can't decide who wins - they obviously think their own arguments are better than their opponents' (or, at the very least, are unwilling to admit otherwise) and so everyone thinks that they "won." But the debaters themselves don't "win" an argument; it's the people witnessing the debate who are the real winners. They decide which arguments are persuasive and which are not.

This is what the First Amendment of the US Constitution is based on, and it is a fundamental part of our democracy. In a vigorous marketplace of ideas, the majority of voters will discern the truth and vote accordingly.

Unfortunately, there never seems to be any sort of closure when debating online, because the audience is primarily comprised of lurkers, most of whom don't say anything (plus, there are no votes), and so it seems like nothing is accomplished. However, I assure you that is not the case. Perhaps not all lurkers, but a lot of them are reading the posts with a critical eye and coming out of the debate convinced one way or the other. They are the "voters" in an online debate. They are the ones who determine who won. In a scenario where only 10-15 people are reading a given thread, this might not seem like much (and it's not), but if enough of these threads occur, then you really can see results...

I'm reminded of Benjamin Franklin's essay "An apology for printers," in which Franklin defended those who printed allegedly offensive opinion pieces. His thought was that very little would be printed if publishers only produced things that were not offensive to anybody.
Printers are educated in the Belief, that when Men differ in Opinion, both sides ought equally to have the Advantage of being heard by the Public; and that when Truth and Error have fair Play, the former is always an overmatch for the latter.
Posted by Mark on May 22, 2005 at 06:58 PM .: link :.


End of This Day's Posts

Sunday, May 08, 2005

Family Guy
It's back! Last week was the first new episode, and things appear to be going well. I remember watching the reruns on the Cartoon Network and cursing FOX for cancelling it. How could they do such a thing?

I have this theory about Family Guy. You see, it's almost too funny. It makes you laugh so much that you forget what was so funny in the first place. And because many of the funny bits are almost completely unrelated to the story (inasmuch as there is a story), it's not like you can remember much by figuring it out from the plot. So all anyone remembers about Family Guy is that it's funny. This apparent amnesia includes the airing date, which during the initial run of Family Guy was all over the place (Sunday, Thursday, Tuesday?). Upon repeated viewings, it becomes easier. Or I'm just a moron who can't remember stuff when he laughs.

American Dad has been less impressive, I think perhaps because it mostly eschews the cutscene/flashback formula of Family Guy. However, I'm an optimist, so I'm willing to give them a chance to flesh it out a bit. I don't think it's as bad as Jeremy Bowers does, but I share his apprehension about Seth McFarlane spreading himself too thin:
I remember when Scott Adams, the author of Dilbert, spread himself too thin with the cartoon and the TV show. I don't have a reference for the quality of the cartoon show without the cartoon, but during the run of the TV show, the quality of the cartoon really took a nose-dive. Most Dilbert daily cartoons before the TV show had effectively two punchlines in the final panel, something that once I noticed really made me respect him, given the constraints of the medium. Other cartoons certainly do it when they can, but Scott Adams pulled it off routinely after his first few years. As he worked on the TV show, the punchline count dropped to an average of one, and it was usually of a lower quality to boot. Now that he's back to just working on the strip, its quality has increased again ...

... I don't know how much Seth McFarlane is in Family Guy; sometimes the creative guy drives the whole show, sometimes he just sets up a good thing that can live on without him. But if it is the former, I hope that Family Guy doesn't suffer for the involvement in American Dad, or McFarlane may lose big by having two mediocre (and subsequently cancelled) shows, instead of one good one.
My thought is that McFarlane does indeed drive the whole show (though I'm not sure about American Dad), but I am again optimistic, for some unspecified reason.
Posted by Mark on May 08, 2005 at 09:59 PM .: link :.


End of This Day's Posts

Sunday, March 27, 2005

Accelerating Change
Slashdot links to a fascinating and thought-provoking one hour (!) audio stream of a speech "by futurist and developmental systems theorist, John Smart." The talk is essentially about the future of technology, more specifically information and communication technology. Obviously, there is a lot of speculation here, but it is interesting so long as you keep it in the "speculation" realm. Much of this post is simply a high-level summary of the talk with a little commentary sprinkled in.

He starts by laying out some key motivations and guidelines for thinking about this sort of thing, and he paraphrases David Brin (and this, in turn, is my paraphrase of Smart):
We need a pragmatic optimism, a can-do attitude, a balance between innovation and preservation, honest dialogue on persistent problems, ... tolerance of the imperfect solutions we have today, and the ability to avoid both doomsaying and a paralyzing adherence to the status quo. ... Great input leads to great output.
So how do new systems supplant the old? They do useful things with less matter, less energy, and less space. They do this until they reach some sort of limit along those axes (a limitation of matter, energy, or space). It turns out that evolutionary processes are great at this sort of thing.

Smart goes on to list three laws of information and communication technology:
  1. Technology learns faster than you do (on the order of 10 million times faster). At some point, Smart speculates that there will be some sort of persistent Avatar (neural-net prosthesis) that will essentially mimic and predict your actions, and that the "thinking" it will do (pattern recognition, etc...) will be millions of times faster than what our brain does. He goes on to wonder what we will look like to such an Avatar, and speculates that we'll be sort of like pets, or better yet, plants. We're rooted in matter, energy, and space/time and are limited by those axes, but our Avatars will have a large advantage, just as we have a large advantage over plants in that respect. But we're built on top of plants, just as our Avatars will be built on top of us. This opens up a whole new can of worms regarding exactly what these Avatars are, what is actually possible, and how they will be perceived. Is it possible for the next step in evolution to occur in man-made (or machine-made) objects? (This section is around 16:30 in the audio)
  2. Human beings are catalysts rather than controllers. We decide which things to accelerate and which to slow down, and this is tremendously important. There are certain changes that are evolutionarily inevitable, but the path we take to reach those ends is not set and can be manipulated. (This section is around 17:50 in the audio)
  3. Interface is extremely important and the goal should be a natural high-level interface. His example is calculators. First generation calculators simply automate human processes and take away your math skills. Second generation calculators like Mathematica allow you to get a much better look at the way math works, but the interface "sucks." Third generation calculators will have a sort of "deep, fluid, natural interface" that allows a kid to have the understanding of a grad student today. (This section is around 20:00 in the audio)
Interesting stuff. His view is that most social and technological advances of the last 75 years or so are accelerating refinements (changes in the microcosm) rather than disruptive changes (changes in the macrocosm). Most new technological advances are really abstracted efficiencies - it's the great unglamorous march of technology. They're small and they're obfuscated by abstraction, thus many of the advances are barely noticed.

That covers about the first half of the speech; in the second half, he lists many examples and explores some more interesting concepts. Here are some bits I found interesting.
  • He talks about transportation and energy, and he argues that even though, on a high level we haven't advanced much (still using oil, natural gas - fossil fuels), there has actually been a massive amount of change, but that the change is mostly hidden in abstracted accelerating efficiencies. He mentions that we will probably have zero-emission fossil fuel vehicles 30-40 years from now (which I find hard to believe) and that rather than focusing on hydrogen or solar, we should be trying to squeeze more and more efficiency out of existing systems (i.e. abstracted efficiencies). He also mentions population growth as a variable in the energy debate, something that is rarely done, but if he is correct that population will peak around 2050 (and that population density is increasing in cities), then that changes all projections about energy usage as well. (This section is around 31:50-35 in the audio) He talks about hybrid technologies and also autonomous highways as being integral in accelerating efficiencies of energy use (This section is around 37-38 in the audio) I found this part of the talk fascinating because energy debates are often very myopic and don't consider things outside the box like population growth and density, autonomous solutions, phase shifts of the problem, &c. I'm reminded of this Michael Crichton speech where he says:
    Let's think back to people in 1900 in, say, New York. If they worried about people in 2000, what would they worry about? Probably: Where would people get enough horses? And what would they do about all the horseshit? Horse pollution was bad in 1900, think how much worse it would be a century later, with so many more people riding horses?
    None of which is to say that we shouldn't be pursuing alternative energy technology or that it can't supplant fossil fuels, just that things seem to be trending towards making fossil fuels more efficient. I see hybrid technology becoming the major enabler in this arena, possibly followed by the autonomous highway (that controls cars and can perhaps give an extra electric boost via magnetism). All of which is to say that the future is a strange thing, and these systems are enormously complex and are sometimes driven by seemingly unrelated events.
  • He mentions an experiment in genetic algorithms used for process automation. Such evolutionary algorithms are often used in circuit design and routing processes to find the most efficient configuration. He mentions one case where someone made a mistake at the quantum level of a system, and when they used the genetic algorithm to design the circuit, they found that the imperfection was actually exploited to create a better circuit. These sorts of evolutionary systems are robust because failure actually drives the system. It's amazing. (This section is around 47-48 in the audio; a bare-bones sketch of this kind of evolutionary loop follows the list.)
  • He then goes on to speculate as to what new technologies he thinks will represent disruptive change. The first major advance he mentions is the development of a workable LUI - a language-based user interface that utilizes a natural language that is easily understandable by both the average user and the computer (i.e. a language that doesn't require years of study to figure out, a la current programming languages). He thinks this will grow out of current search technologies (perhaps in a scenario similar to EPIC). One thing he mentions is that the internet right now doesn't give an accurate representation of the wide range of interests and knowledge that people have, but that this is steadily getting better over time. As more and more individuals, with more and more knowledge, begin interacting on the internet, they begin to become a sort of universal information resource. (This section is around 50-53 in the audio)
  • The other major thing he speculates about is the development of personality capture and parallel computing, which sort of integrates with the LUI. This is essentially the Avatar I mentioned earlier which mimics and predicts your actions.
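Here's the bare-bones sketch of the evolutionary loop promised above. It's a toy of my own devising (the bit-counting fitness function is a stand-in; real systems evolve circuit layouts or routing tables), but it shows why failure drives these systems:

    import random

    def evolve(fitness, genome_len=16, pop_size=50, generations=200):
        # Minimal genetic algorithm: keep the fittest half, breed mutated
        # copies of them, repeat. Low-fitness genomes are simply discarded
        # each generation -- failure is what powers the search.
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for parent in survivors:
                child = parent[:]
                child[random.randrange(genome_len)] ^= 1  # point mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    # Toy objective: maximize the number of 1-bits in the genome.
    best = evolve(fitness=sum)
    print(best, sum(best))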
As always, we need to keep our feet on the ground here. Futurists are fun to listen to, but it's easy to get carried away. The development of a LUI and a personality capture system would be an enormous help, but we still need good information aggregation and correlation systems if we're really going to progress. Right now the problem is finding the information we need, and analyzing the information. A LUI and personality capture system will help with the finding of information, but not so much with the analysis (the separating of the signal from the noise). As I mentioned before, the speech is long (one hour), but it's worth a listen if you have the time...
Posted by Mark on March 27, 2005 at 08:40 PM .: link :.


End of This Day's Posts

Sunday, February 20, 2005

The Stability of Three
One of the things I've always respected about Neal Stephenson is his attitude (or rather, the lack thereof) regarding politics:
Politics - These I avoid for the simple reason that artists often make fools of themselves, and begin to produce bad art, when they decide to get political. A novelist needs to be able to see the world through the eyes of just about anyone, including people who have this or that set of views on religion, politics, etc. By espousing one strong political view a novelist loses the power to do this. Anyone who has convinced himself, based on reading my work, that I hold this or that political view, is probably wrong. What is much more likely is that, for a while, I managed to get inside the head of a fictional character who held that view.
Having read and enjoyed several of his books, I think this attitude has served him well. In a recent interview in Reason magazine, Stephenson makes several interesting observations. The whole thing is great, and many people are interested in his comments regarding American technology and science, but I found one other tidbit very interesting. Strictly speaking, it doesn't break with his attitude about politics, but it is somewhat political:
Speaking as an observer who has many friends with libertarian instincts, I would point out that terrorism is a much more formidable opponent of political liberty than government. Government acts almost as a recruiting station for libertarians. Anyone who pays taxes or has to fill out government paperwork develops libertarian impulses almost as a knee-jerk reaction. But terrorism acts as a recruiting station for statists. So it looks to me as though we are headed for a triangular system in which libertarians and statists and terrorists interact with each other in a way that I’m afraid might turn out to be quite stable.
I took particular note of what he describes as a "triangular system" because it's something I've seen before...

One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn't become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders also did some interesting structural work as well.

Taking their cue from the English Parliament's relationship with the King of England, the founders decided to create a legislative branch separate from the executive. This, in turn, placed the two governing bodies in competition. However, this isn't a very robust system. If one of the governing bodies becomes more powerful than the other, they can leverage their advantage to accrue more power, thus increasing the imbalance.

A two-way balance of power is unstable, but a three-way balance turns out to be very stable. If any one body becomes more powerful than the other two, the two usually can and will temporarily unite, and their combined power will still exceed the third. So the founders added a third governing body, an independent judiciary.
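A toy simulation illustrates the difference. The dynamics here are entirely invented for illustration (power compounds on itself, and with three bodies the two weaker ones ally against the strongest each round), so treat it as a cartoon of the argument rather than a model of government:

    def step(powers, growth=0.05, coalition=0.1):
        # Each body leverages its power to accrue more (rich get richer)...
        total = sum(powers)
        powers = [p * (1 + growth * p / total) for p in powers]
        # ...but with three or more bodies, the weaker ones unite against
        # the strongest and claw back a share of its power.
        if len(powers) >= 3:
            top = powers.index(max(powers))
            levy = coalition * powers[top]
            powers[top] -= levy
            share = levy / (len(powers) - 1)
            powers = [p if i == top else p + share for i, p in enumerate(powers)]
        return powers

    two, three = [1.0, 1.1], [1.0, 1.1, 1.2]
    for _ in range(500):
        two, three = step(two), step(three)

    print(max(two) / min(two))      # two-way: the early lead compounds and runs away
    print(max(three) / min(three))  # three-way: stays near parity, oscillating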

The result was a bizarre sort of stable oscillation of power between the three major branches of the federal government. Major shifts in power (such as wars) disturbed the system, but it always fell back to a preferred state of flux. This stable oscillation turns out to be one of the key elements of Chaos theory, and is referred to as a strange attractor. These "triangular systems" are particularly good at this, and there are many other examples...

Some argue that the Cold War stabilized considerably when China split from the Soviet Union. Once it became a three-way conflict, there was much less of a chance of unbalance (and as unbalance would have lead to nuclear war, this was obviously a good thing).

Steven Den Beste once noted this stabilizing power of three in the interim Iraqi constitution, where the Iraqis instituted a Presidency Council of 3 Presidents representing each of the 3 major factions in Iraq:
...those writing the Iraqi constitution also had to create a system acceptable to the three primary factions inside of Iraq. If they did not, the system would shake itself to pieces and there was a risk of Iraqi civil war.

The divisions within Iraq are very real. But this constitution takes advantage of the fact that there are three competing factions none of which really trusts the other. This constitution leverages that weakness, and makes it into a strength.
It should be interesting to see if that structure will be maintained in the new Iraqi constitution.

As for Stephenson's speculation that a triangular system consisting of libertarians, statists, and terrorists may develop, I'm not sure. They certainly seem to feed off one another in a way that would facilitate such a system, but I'm not positive it would work out that way, nor do I think it is a particularly desirable state to be in, all the more so because it could be a very stable system due to its triangular structure. In any case, I thought it was an interesting observation and well worth considering...
Posted by Mark on February 20, 2005 at 08:06 PM .: link :.


End of This Day's Posts

Sunday, February 06, 2005

Stupendous Badass
Time is tight this week, so just a few quick quotes from Neal Stephenson's Cryptonomicon which struck me during a recent re-reading. The first is essentially a summary of evolution:
Let's set the existence-of-God issue aside for a later volume, and just stipulate that in some way, self-replicating organisms came into existence on this planet and immediately began trying to get rid of each other, either by spamming their environments with rough copies of themselves, or by more direct means which hardly need to be belabored. Most of them failed, and their genetic legacy was erased from the universe forever, but a few found some way to survive and to propagate. After about three billion years of this sometimes zany, frequently tedious fugue of carnality and carnage, Godfrey Waterhouse IV was born, in Murdo, South Dakota, to Blanche, the wife of a Congregational preacher named Bunyan Waterhouse. Like every other creature on the face of the earth, Godfrey was, by birthright, a stupendous badass, albeit in the somewhat narrow technical sense that he could trace his ancestry back up a long line of slightly less highly evolved stupendous badasses to that first self-replicating gizmo - which, given the number and variety of its descendants, might justifiably be described as the most stupendous badass of all time. Everyone and everything that wasn't a stupendous badass was dead. As nightmarishly lethal, memetically programmed death-machines went, these were the nicest you could ever hope to meet.
And the next quote comes from the perspective of Goto Dengo, a Japanese soldier during World War II:
The Americans have invented a totally new bombing tactic in the middle of a war and implemented it flawlessly. His mind staggers like a drunk in the aisle of a careening train. They saw that they were wrong, they admitted their mistake, they came up with a new idea. The new idea was accepted and embraced all the way up the chain of command. Now they are using it to kill their enemies.

No warrior with any concept of honor would have been so craven. So flexible. What a loss of face it must have been for the officers who had trained their men to bomb from high altitudes. What has become of those men? They must have all killed themselves, or perhaps been thrown into prison.
Most of you reading this know that the officers who displayed some adaptability (to borrow another phrase from Stephenson) didn't kill themselves, nor were they thrown into prison. They were most likely applauded for their efforts. But Goto Dengo, and the Japanese at the time, embraced a warrior culture where such actions were deeply dishonorable.

It's interesting to consider the second quote in light of the first. In a sense, a war is an implementation of what Stephenson describes as self-replicating organisms "trying to get rid of each other." So the question is what part do honor and flexibility play in the grand evolutionary scheme of things?
Posted by Mark on February 06, 2005 at 11:45 PM .: link :.


End of This Day's Posts

Sunday, January 16, 2005

Chasing the Tail
The Long Tail by Chris Anderson: An excellent article from Wired that demonstrates a few of the concepts and ideas I've been writing about recently. One such concept is well described by Clay Shirky's excellent article Power Laws, Weblogs, and Inequality. A system governed by a power law distribution is essentially one where the power (whether it be measured in wealth, links, etc.) is concentrated in a small population (when graphed, the rest of the population's power values resemble a long tail). This concentration occurs spontaneously, and it is often strengthened because members of the system have an incentive to leverage their power to accrue more power.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
As such, this distribution manifests in all sorts of human endeavors, including economics (for the accumulation of wealth), language (for word frequency), weblogs (for traffic or number of inbound links), genetics (for gene expression), and, as discussed in the Wired article, entertainment media sales. Typically, the sales of music, movies, and books follow a power law distribution, with a small number of hit artists who garner the grand majority of the sales. The typical rule of thumb is that 20% of available artists get 80% of the sales.
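That rule of thumb falls out of the math almost automatically. A quick sketch, using the simplest 1/rank (Zipf-style) distribution for illustration:

    # Zipf-style sales: the artist ranked k sells in proportion to 1/k.
    n_artists = 1000
    sales = [1.0 / rank for rank in range(1, n_artists + 1)]

    total = sum(sales)
    top_quintile = sum(sales[: n_artists // 5])
    print(f"Top 20% of artists capture {top_quintile / total:.0%} of sales")
    # With these numbers: roughly 79% -- right around the 80/20 rule of thumb.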

Because of the expense of producing the physical product, and giving it a physical point of sale (shelf-space, movie theaters, etc...), this is bad news for the 80% of artists who get 20% of the sales. Their books, movies, and music eventually go out of print and are generally forgotten, while the successful artists' works are continually reprinted and sold, building on their own success.

However, with the advent of the internet, this is beginning to change. Sales are still governed by the power law distribution, but the internet is removing the physical limitations of entertainment media.
An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that's essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that's the rent for a half inch of shelf space. And so on for DVD rental shops, videogame stores, booksellers, and newsstands.

In each case, retailers will carry only content that can generate sufficient demand to earn its keep. But each can pull only from a limited local population - perhaps a 10-mile radius for a typical movie theater, less than that for music and bookstores, and even less (just a mile or two) for video rental shops. It's not enough for a great documentary to have a potential national audience of half a million; what matters is how many it has in the northern part of Rockville, Maryland, and among the mall shoppers of Walnut Creek, California.
The decentralized nature of the internet makes it a much better way to distribute entertainment media, as that documentary that has a potential national (heck, worldwide) audience of half a million people could likely succeed if distributed online. The infrastructure for films isn't there yet, but it has been happening more in the digital music world, and even in a hybrid space like Amazon.com, which sells physical products, but in a non-local manner. With digital media, the cost of producing and distributing entertainment media goes way down, and thus even average artists can be considered successful, even if their sales don't approach that of the biggest sellers.

The internet isn't a broadcast medium; it is on-demand, driven by each individual's personal needs. Diversity is the key, and as Shirky's article says: "Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality." With respect to weblogs (or more generally, websites), big sites are, well, bigger, but links and traffic aren't the only metrics for success. Smaller websites are smaller in those terms, but are often more specialized, and thus they do better both in terms of connecting with their visitors (or customers) and in providing compelling value to them. Larger sites, by virtue of their popularity, simply aren't able to interact with visitors as effectively. This is assuming, of course, that the smaller sites do a good job. My site is very small (in terms of traffic and links), but not very specialized, so it has somewhat limited appeal. However, the parts of my site that get the most traffic are the ones that are specialized (such as the Christmas Movies page, or the Asimov Guide). I think part of the reason the blog has never really caught on is that I cover a very wide range of topics, thus diluting the potential specialized value of any single topic.

The same can be said for online music sales. They still conform to a power law distribution, but what we're going to see is increasing sales of more diverse genres and bands. We're in the process of switching from a system in which only the top 20% are considered profitable, to one where 99% are valuable. This seems somewhat counterintuitive for a few reasons:
The first is we forget that the 20 percent rule in the entertainment industry is about hits, not sales of any sort. We're stuck in a hit-driven mindset - we think that if something isn't a hit, it won't make money and so won't return the cost of its production. We assume, in other words, that only hits deserve to exist. But Vann-Adibé, like executives at iTunes, Amazon, and Netflix, has discovered that the "misses" usually make money, too. And because there are so many more of them, that money can add up quickly to a huge new market.

With no shelf space to pay for and, in the case of purely digital services like iTunes, no manufacturing costs and hardly any distribution fees, a miss sold is just another sale, with the same margins as a hit. A hit and a miss are on equal economic footing, both just entries in a database called up on demand, both equally worthy of being carried. Suddenly, popularity no longer has a monopoly on profitability.

The second reason for the wrong answer is that the industry has a poor sense of what people want. Indeed, we have a poor sense of what we want.
The need to figure out what people want out of a diverse pool of options is where self-organizing systems come into the picture. A good example is Amazon's recommendations engine, and their ability to aggregate various customer inputs into useful correlations. Their "customers who bought this item also bought" lists (and the litany of variations on that theme), more often than not, provide a way to traverse the long tail. They encourage customer participation, allowing customers to write reviews, select lists, and so on, providing feedback loops that improve the quality of recommendations. Note that none of these features was designed to directly sell more items. The focus was on allowing an efficient system of collaborative feedback. Good recommendations are an emergent result of that system. Similar features are available in the online music services, and the Wired article notes:
For instance, the front screen of Rhapsody features Britney Spears, unsurprisingly. Next to the listings of her work is a box of "similar artists." Among them is Pink. If you click on that and are pleased with what you hear, you may do the same for Pink's similar artists, which include No Doubt. And on No Doubt's page, the list includes a few "followers" and "influencers," the last of which includes the Selecter, a 1980s ska band from Coventry, England. In three clicks, Rhapsody may have enticed a Britney Spears fan to try an album that can hardly be found in a record store.
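Under the hood, the simplest version of a "customers who bought this also bought" feature is just co-occurrence counting over purchase histories. Here's a toy sketch in Python (the purchase data is obviously made up, and a real system would do far more weighting and normalization), which shows how recommendations emerge from aggregated customer behavior rather than from anyone's editorial plan:

from collections import defaultdict

# Invented purchase baskets; each set is one customer's purchases.
purchases = [
    {"britney spears", "pink"},
    {"pink", "no doubt"},
    {"no doubt", "the selecter"},
    {"britney spears", "pink", "no doubt"},
]

# Count how often each pair of items shows up together.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in purchases:
    for a in basket:
        for b in basket:
            if a != b:
                co_counts[a][b] += 1

def also_bought(item, n=3):
    # Rank other items by how often they co-occur with this one.
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [name for name, _ in ranked[:n]]

print(also_bought("pink"))  # e.g. ['britney spears', 'no doubt']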
Obviously, these systems aren't perfect. As I've mentioned before, a considerable amount of work needs to be done with respect to the aggregation and correlation aspects of these systems. Amazon and the online music services have a good start, and weblogs are trailing along behind them a bit, but the nature of self-organizing systems dictates that you don't get a perfect solution to start, but rather a steadily improving system. What's becoming clear, though, is that the little guys are (collectively speaking) just as important as the juggernauts, and that's why I'm not particularly upset that my blog won't be wildly popular anytime soon.
Posted by Mark on January 16, 2005 at 08:07 PM .: link :.


End of This Day's Posts

Sunday, December 12, 2004

Stigmergic Notes
I've been doing a lot of reading and thinking about the concepts discussed in my last post. It's a fascinating, if a little bewildering, topic. I'm not sure I have a great handle on it, but I figured I'd share a few thoughts.

There are many systems that are incredibly flexible, yet they came into existence, grew, and self-organized without any actual planning. Such systems are often referred to as Stigmergic Systems. To a certain extent, free markets have self-organized, guided by such emergent effects as Adam Smith's "invisible hand". Many organisms are able to quickly adapt to changing conditions using a technique of continuous reproduction and selection. To an extent, there are forces on the internet that are beginning to self-organize and produce useful emergent properties, blogs among them.

Such systems are difficult to observe, and it's hard to really get a grasp on what a given system is actually indicating (or what properties are emerging). This is, in part, the way such systems are supposed to work. When many people talk about blogs, they find it hard to believe that a system composed mostly of small, irregularly updated, and downright mediocre (if not worse) blogs can have truly impressive emergent properties (I tend to model the ideal output of the blogosphere as an information resource). Believe it or not, blogging wouldn't work without all the crap. There are a few reasons for this:

The System Design: The idea isn't to design a perfect system. The point is that these systems aren't planned, they're self-organizing. What we design are systems which allow this self-organization to occur. In nature, this is accomplished through constant reproduction and selection (for example, some biological systems can be represented as a function of genes. There are hundreds of thousands of genes, with a huge and diverse number of combinations. Each combination can be judged based on some criteria, such as survival and reproduction. Nature introduces random mutations so that gene combinations vary. Efficient combinations are "selected" and passed on to the next generation through reproduction, and so on).
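That reproduce-mutate-select loop is simple enough to sketch in a few lines of Python. This toy version evolves bit strings toward a trivial fitness goal (counting ones); every parameter is arbitrary, but the point is that the "solution" emerges from the loop itself, not from any plan:

import random

GENES, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.02

def fitness(genome):
    # An arbitrary criterion: genomes with more ones are "fitter".
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Reproduction with random mutation (each bit may flip).
    children = [[1 - g if random.random() < MUTATION else g for g in p]
                for p in parents]
    population = parents + children

print(max(fitness(g) for g in population))  # creeps toward 20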

The important thing with respect to blogs is the tools we use. To a large extent, blogging is simply an extension of many mechanisms already available on the internet, most especially the link. Other weblog-specific mechanisms like blogrolls, permanent links, comments (with links of course) and trackbacks have added functionality to the link and made it more powerful. For a number of reasons, weblogs tend to follow a power-law distribution, which spontaneously produces a sort of hierarchical organization. Many believe that such a distribution is inherently unfair, as many excellent blogs don't get the attention they deserve, but while many of the larger bloggers seek to promote smaller blogs (some even providing mechanisms for promotion), I'm not sure there is any reliable way to systemically "fix" the problem without harming the system's self-organizational abilities.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.
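Shirky's point is easy to demonstrate with a toy simulation: assume readers usually follow existing links and only occasionally discover a blog at random (the 90/10 split below, like everything else in this sketch, is an arbitrary assumption), and a power law falls out on its own:

import random
from collections import Counter

random.seed(1)
blogs = list(range(500))
links = Counter({b: 1 for b in blogs})  # everyone starts with one link

for _ in range(20_000):
    if random.random() < 0.10:
        # Occasionally a reader discovers a blog at random...
        choice = random.choice(blogs)
    else:
        # ...but usually follows existing links, so popular blogs
        # are proportionally more likely to get even more popular.
        choice = random.choices(blogs, weights=[links[b] for b in blogs])[0]
    links[choice] += 1

print(links.most_common(10))  # a few blogs soak up most of the links

Nobody in the simulation is coordinating anything; the hierarchy simply organizes itself out of individual choices.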
This self-organization is one of the important things about weblogs; any attempt to get around it will end up harming you in the long run, as the important thing is to find a state in which weblogs are working most efficiently. How can the weblog community be arranged to self-organize and find its best configuration? That is the real question, and that is what we should be trying to answer (emphasis mine):
...although the purpose of this example is to build an information resource, the main strategy is concerned with creating an efficient system of collaboration. The information resource emerges as an outcome if this is successful.
Failure is Important: Self-Organizing systems tend to have attractors (a preferred state of the system), such that these systems will always gravitate towards certain positions (or series of positions), no matter where they start. Surprising as it may seem, self-organization only really happens when you expose a system in a steady state to an environment that can destabilize it. By disturbing a steady state, you might cause the system to take up a more efficient position.

It's tempting to dismiss weblogs as a fad because so many of them are crap. But that crap is actually necessary because it destabilizes the system. Bloggers often add their perspective to the weblog community in the hopes that this new information will change the way others think (i.e. they are hoping to induce change - this is roughly what is referred to as Stigmergy). That new information will often prompt other individuals to respond in some way or another (even if they're not responding directly). Essentially, change is introduced into the system, and this can cause unpredictable and destabilizing effects. Sometimes this destabilization actually helps the system, sometimes (and probably more often than not) it doesn't. Regardless of its direct effects, the process is essential because it is helping the system become increasingly comprehensive. I touched on this in my last post (among several others) when I claimed that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. An individual blog may fail to solve a problem, but that failure is important too when you look at the systemic level. Of course, all of this also muddies the waters, causing the system to deteriorate to a state where it is less efficient to use. For every success story like Rathergate, there are probably 10 bizarre and absurd conspiracy theories to contend with.
This is the dilemma faced by all biological systems. The effects that cause them to become less efficient are also the effects that enable them to evolve into more efficient forms. Nature solves this problem with its evolutionary strategy of selecting for the fittest. This strategy makes sure that progress is always in a positive direction only.
So what weblogs need is a selection process that separates the good blogs from the bad. This ties in with the aforementioned power-law distribution of weblogs. Links, be they blogroll links or links to an individual post, essentially represent a sort of currency of the blogosphere and provide an essential internal feedback loop. There is a rudimentary form of this sort of thing going on, and it has proven to be very successful (as Jeremy Bowers notes, it certainly seems to do so much better than the media whose selection process appears to be simple heuristics). However, the weblog system is still young and I think there is considerable room for improvement in its selection processes. We've only hit the tip of the iceberg here. Syndication, aggregation, and filtering need to improve considerably. Note that all of those things are systemic improvements. None of them directly act upon the weblog community or the desired informational output of the community. They are improvements to the strategy of creating an efficient system of collaboration. A better informational output emerges as an outcome if the systemic improvements are successful.
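For what it's worth, here is a crude illustration of what a link-based selection mechanism could look like: a minimal PageRank-style scoring pass over a tiny invented link graph. I'm not claiming this is how any particular aggregator works, only that "links as currency" can be made mechanical:

# Invented link graph: each blog lists the blogs it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
blogs = list(links)
rank = {b: 1 / len(blogs) for b in blogs}
damping = 0.85  # standard PageRank-style damping factor

for _ in range(30):  # iterate until the scores settle down
    new = {b: (1 - damping) / len(blogs) for b in blogs}
    for src, outs in links.items():
        for dst in outs:
            # Each blog passes its score along its outbound links.
            new[dst] += damping * rank[src] / len(outs)
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "c" wins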

This is truly a massive subject, and I'm only beginning to understand some of the deeper concepts, so I might end up repeating myself a bit in future posts on this subject, as I delve deeper into the underlying concepts and gain a better understanding. The funny thing is that it doesn't seem like the subject itself is very well defined, so I'm sure lots will be changing in the future. Below are a few links to information that I found helpful in writing this post.
Posted by Mark on December 12, 2004 at 11:15 PM .: link :.


End of This Day's Posts

Sunday, December 05, 2004

An Epic in Parallel Form
Tyler Cowen has an interesting post on the scholarly content of blogging in which he speculates as to how blogging and academic scholarship fit together. In so doing he makes some general observations about blogging:
Blogging is a fundamentally new medium, akin to an epic in serial form, but combining the functions of editor and author. Who doesn't dream of writing an epic?

Don't focus on the single post. Rather a good blog provides you a whole vision of what a field is about, what the interesting questions are, and how you might answer them. It is also a new window onto a mind. And by packaging intellectual content with some personality, bloggers appeal to the biological instincts of blog readers. Be as intellectual as you want, you still are programmed to find people more memorable than ideas.
It's an interesting perspective. Many blogs are general in subject, but some of the ones that really stand out have some sort of narrative (for lack of a better term) that you can follow from post to post. As Cowen puts it, an "epic in serial form." The suggestion that reading a single blog many times is more rewarding than reading the best posts from many different blogs is interesting. But while a single blog may give you a broad view of what a field is about, it can also be rewarding to aggregate the specific views of a wide variety of individuals, even biased and partisan individuals. As Cowen mentions, the blogosphere as a whole is the relevant unit of analysis. Even if each individual view is unimpressive on its own, that may not be the case when taken collectively. In a sense, while each individual is writing a flawed epic in serial form, they are all contributing to an epic in parallel form.

Which brings up another interesting aspect of blogs. When the blogosphere tackles a subject, it produces a diverse set of opinions and perspectives, all published independently by a network of analysts who are all doing work in parallel. The problem here is that the decentralized nature of the blogosphere makes aggregation difficult. Determining the "answer" of a group as large and diverse as the blogosphere, based on all of the disparate information it has produced, is incredibly difficult, especially when the majority of the data represents the opinions of various analysts. A deficiency in aggregation is part of where groupthink comes from, but some groups are able to harness their disparity into something productive. The many are smarter than the few, but only if the many are able to aggregate their data properly.
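Here's a toy example of why aggregation matters so much: give a crowd of analysts noisy estimates of some quantity, and the average of the crowd beats the typical individual. All the numbers below are invented:

import random

random.seed(2)
truth = 100.0
# 500 independent analysts, each off by a lot on average.
estimates = [truth + random.gauss(0, 25) for _ in range(500)]

aggregate = sum(estimates) / len(estimates)
typical_error = sum(abs(e - truth) for e in estimates) / len(estimates)

print(f"typical individual error: {typical_error:.1f}")
print(f"error of the aggregate:   {abs(aggregate - truth):.1f}")

The catch is that this only works if the errors are reasonably independent and you actually have a mechanism for combining them - which is exactly the hard part for something as decentralized as the blogosphere.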

In theory, blogs represent a self-organizing system that has the potential to evolve and display emergent properties (a sort of human hive mind). In practice, it's a little more difficult to say. I think it's clear that the spontaneous appearance of collective thought, as implemented through blogs or other communication systems, is happening frequently on the internet. However, each occurrence is isolated and only represents an incremental gain in productivity. In other words, a system will sometimes self-organize in order to analyze a problem and produce an enormous amount of data which is then aggregated into a shared vision (a vision which is much more sophisticated than anything that one individual could come up with), but the structure that appears in that case will disappear as the issue dies down. The incredible increase in analytic power is not a permanent stair step, nor is it ubiquitous. Indeed, it can also be hard to recognize the signal in a great sea of noise.

Such systems are, of course, constantly and spontaneously self-organizing, themselves tackling problems in parallel. Some systems will compete with others, some systems will organize around trivial issues, some systems won't be nearly as effective as others. Because of this, it might be that we don't even recognize when a system really transcends its perceived limitations. Nor are such systems limited to blogs. In fact they are quite common, and they appear in lots of different types of systems. Business markets are, in part, self-organizing, with emergent properties like Adam Smith's "invisible hand". Open Source software is another example of a self-organizing system.

Interestingly enough, this subject ties in nicely with a series of posts I've been working on regarding the properties of Reflexive documentaries, polarized debates, computer security, and national security. One of the general ideas discussed in those posts is that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. Ironically, in acknowledging one's own subjectivity, one becomes more objective and reliable. This applies on an individual basis, but becomes much more powerful when it is part of an emergent system of analysis as discussed above. Blogs are excellent at this sort of thing precisely because they are made up of independent parts that make no pretense at objectivity. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. The news media represents a competing system (the journalist being the media's equivalent of the blogger), one that is much more rigid and unyielding. The interplay between blogs and the media is fascinating, and you can see each medium evolving in response to the other (the degree to which this is occurring is naturally up for debate). You might even be able to make the argument that blogs are, themselves, emergent properties of the mainstream media.

Personally, I don't think I have that exact sort of narrative going here, though I do believe I've developed certain thematic consistencies in terms of the subjects I cover here. I'm certainly no expert and I don't post nearly often enough to establish the sort of narrative that Cowen is talking about, but I do think a reader would benefit from reading multiple posts. I try to make up for my low posting frequency by writing longer, more detailed posts, often referencing older posts on similar subjects. However, I get the feeling that if I were to break up my posts into smaller, more digestible pieces, the overall time it would take to read and produce the same material would be significantly longer. Of course, my content is rarely scholarly in nature, and my subject matter varies from week to week as well, but I found this interesting to think about nonetheless.

I think I tend to be more of an aggregator than anything else, which is interesting because I've never thought about what I do in those terms. It's also somewhat challenging, as one of my weaknesses is being timely with information. Plus aggregation appears to be one of the more tricky aspects of a system such as the ones discussed above, and with respect to blogs, it is something which definitely needs some work...

Update 12.13.04: I wrote some more on the subject. I also made a minor edit to this entry, moving one paragraph lower down. No content has actually changed, but the new order flows better.
Posted by Mark on December 05, 2004 at 09:23 PM .: link :.


End of This Day's Posts

Sunday, November 21, 2004

Polarized Debate
This is yet another in a series of posts fleshing out ideas initially presented in a post regarding Reflexive Documentary filmmaking and the media. In short, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I expanded the scope of the concepts originally presented in that post to include a broader range of information dissemination processes, which lead to a post on computer security and a post on national security.

I had originally planned to apply the same concepts to debating in a relatively straightforward manner. I'll still do that, but recent events have led me to reconsider my position, thus there will most likely be some unresolved questions at the end of this post.

So the obvious implication with respect to debating is that a debate can be more productive when each side exposes their own biases and agenda in making their argument. Of course, this is pretty much required by definition, but what I'm getting at here is more a matter of tactics. Debating tactics often take poor forms, with participants scoring cheap points by using intuitive but fallacious arguments.

I've done a lot of debating in various online forums, often taking a less than popular point of view (I tend to be a contrarian, and am comfortable on the defense). One thing that I've found is that as a debate heats up, the arguments become polarized. I sometimes find myself defending someone or something that I normally wouldn't. This is, in part, because a polarizing debate forces you to dispute everything your opponent argues. To concede one point irrevocably weakens your position, or so it seems. Of course, the fact that I'm a contrarian, somewhat competitive, and stubborn also plays a part in this. Emotions sometimes flare, attitudes clash, and you're often left feeling dirty after such a debate.

None of which is to say that polarized debate is bad. My whole reason for participating in such debates is to get others to consider more than one point of view. If a few lurkers read a debate and come away from it confused or at least challenged by some of the ideas presented, I consider that a win. There isn't anything inherently wrong with partisanship, and as frustrating as some debates are, I find myself looking back on them as good learning experiences. In fact, taking an extreme position and thinking from that biased standpoint helps you understand not only that viewpoint, but the extreme opposite as well.

The problem with such debates, however, is that they really are divisive. A debate which becomes polarized might end up providing you with a more balanced view of an issue, but such debates sometimes also present an unrealistic view of the issue. An example of this is abortion. Debates on that topic are usually heated and emotional, but the issue polarizes, and people who would come down somewhere around the middle end up arguing an extreme position for or against.

Again, I normally chalk this polarization up as a good thing, but after the election, I'm beginning to see the wisdom in perhaps pursuing a more moderated approach. With all the red/blue dichotomies being thrown around with reckless abandon, talk of moving to Canada and even talk of secession(!), it's pretty obvious that the country has become overly polarized.

I've been writing about Benjamin Franklin recently on this here blog, and I think his debating style is particularly apt to this discussion:
Franklin was worried that his fondness for conversation and eagerness to impress made him prone to "prattling, punning and joking, which only made me acceptable to trifling company." Knowledge, he realized, "was obtained rather by the use of the ear than of the tongue." So in the Junto, he began to work on his use of silence and gentle dialogue.

One method, which he had developed during his mock debates with John Collins in Boston and then when discoursing with Keimer, was to pursue topics through soft, Socratic queries. That became the preferred style for Junto meetings. Discussions were to be conducted "without fondness for dispute or desire of victory." Franklin taught his friends to push their ideas through suggestions and questions, and to use (or at least feign) naive curiosity to avoid contradicting people in a manner that could give offense. ... It was a style he would urge on the Constitutional Convention sixty years later. [This is an excerpt from the recent biography Benjamin Franklin: An American Life by Walter Isaacson]
This contrasts rather sharply with what passes for civilized debate these days. Franklin actually considered it rude to directly contradict or dispute someone, something I had always found to be confusing. I typically favor a frank exchange of ideas (i.e. saying what you mean), but I'm beginning to come around. In the wake of the election, a lot of advice has been offered up for liberals and the left, and a lot of suggestions center around the idea that they need to "reach out" to more voters. This has been received with indignation by liberals and leftists, and one could hardly blame them. From their perspective, conservatives and the right are just as bad if not worse and they read such advice as if they're being asked to give up their values. Irrespective of which side is right, I think the general thrust of the advice is that liberal arguments must be more persuasive. No matter how much we might want to paint the country into red and blue partitions, if you really want to be accurate, you'd see only a few small areas of red and blue drowning in a sea of purple. The Democrats don't need to convince that many people to get a more favorable outcome in the next election.

And so perhaps we should be fighting the natural polarization of a debate and take a cue from Franklin, who stressed the importance of deferring, or at least pretending to defer, to others:
"Would you win the hearts of others, you must not seem to vie with them, but to admire them. Give them every opportunity of displaying their own qualifications, and when you have indulged their vanity, they will praise you in turn and prefer you above others... Such is the vanity of mankind that minding what others say is a much surer way of pleasing them than talking well ourselves."
There are weaknesses to such an approach, especially if your opponent does not return the favor, but I think it is well worth considering. That the country has so many opposing views is not necessarily bad, and indeed, is a necessity in democracy for ideas to compete. But perhaps we need less spin and more moderation... In his essay "Apology for Printers" Franklin opines:
"Printers are educated in the belief that when men differ in opinion, both sides ought equally to have the advantage of being heard by the public; and that when Truth and Error have fair play, the former is always an overmatch for the latter."
Indeed.

Update: Andrew Olmsted posted something along these lines, and he has a good explanation as to why debates often go south:
I exaggerate for effect, but anyone spending much time on sites devoted to either party quickly runs up against the assumption that the other side isn't just wrong, but evil. And once you've made that assumption, it would be wrong to even negotiate with the other side, because any compromise you make is taking the country one step closer to that evil. The enemy must be fought tooth and nail, because his goals are so heinous.

... We tend to assume the worst of those we're arguing with; that he's ignoring this critical point, or that he understands what we're saying but is being deliberately obtuse. So we end up getting frustrated, saying something nasty, and cutting off any opportunity for real dialogue.
I don't know that we're a majority, as Olmsted hopes, but there's more than just a few of us, at least...
Posted by Mark on November 21, 2004 at 03:29 PM .: link :.


End of This Day's Posts

Thursday, November 11, 2004

Arranging Interests in Parallel
I have noticed a tendency on my part to, on occasion, quote a piece of fiction, and then comment on some wisdom or truth contained therein. This sort of thing is typically frowned upon in rigorous debate as fiction is, by definition, contrived and thus referencing it in a serious argument is rightly seen as undesirable. Fortunately for me, this blog, though often taking a serious tone, is ultimately an exercise in thinking for myself. The point is to have fun. This is why I will sometimes quote fiction to make a point, and it's also why I enjoy questionable exercises like speculating about historical figures. As I mentioned in a post on Benjamin Franklin, such exercises usually end up saying more about me and my assumptions than anything else. But it's my blog, so that is more or less appropriate.

Astute readers must at this point be expecting to receive a citation from a piece of fiction, followed by an application of the relevant concepts to some ends. And they would be correct.

Early on in Neal Stephenson's novel The System of the World, Daniel Waterhouse reflects on what is required of someone in his position:
He was at an age where it was never possible to pursue one errand at a time. He must do many at once. He guessed that people who had lived right and arranged things properly must have it all rigged so that all of their quests ran in parallel, and reinforced and supported one another just so. They gained reputations as conjurors. Others found their errands running at cross purposes and were never able to do anything; they ended up seeming mad, or else perceived the futility of what they were doing and gave up, or turned to drink.
Naturally, I believe there is some truth to this. In fact, the life of Benjamin Franklin, a historical figure from approximately the same time period as Dr. Waterhouse, provides us with a more tangible reference point.

Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. The consummate example of Franklin's proclivities was the Junto, a club of young workingmen formed by Franklin in the fall of 1727. The Junto was a small club composed of enterprising tradesman and artisans who discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers. The enterprise was typical of Franklin, who was always eager to form associations for mutual benefit, and who aligned his interests so they ran in parallel, reinforcing and supporting one another.

A more specific example of Franklin's knack for aligning interests is when he produced the first recorded abortion debate in America. At the time, Franklin was running a print shop in Philadelphia. His main competitor, Andrew Bradford, published the town's only newspaper. The paper was meager, but very profitable in both money and prestige (which made Bradford more respected by merchants and politicians, and thus more likely to get printing jobs), and Franklin decided to launch a competing newspaper. Unfortunately, another rival printer, Samuel Keimer, caught wind of Franklin's plan and immediately launched a hastily assembled newspaper of his own. Franklin, realizing that it would be difficult to launch a third paper right away, vowed to crush Keimer:
In a competitive bank shot, Franklin decided to write a series of anonymous letters and essays, along the lines of the Silence Dogood pieces of his youth, for Bradford's [American Weekly Mercury] to draw attention away from Keimer's new paper. The goal was to enliven, at least until Keimer was beaten, Bradford's dull paper, which in its ten years had never published any such features.

The first two pieces were attacks on poor Keimer, who was serializing entries from an encyclopedia. His initial installment included, innocently enough, an entry on abortion. Franklin pounced. Using the pen names "Martha Careful" and "Celia Shortface," he wrote letters to Bradford's paper feigning shock and indignation at Keimer's offense. As Miss Careful threatened, "If he proceeds farther to expose the secrets of our sex in that audacious manner [women would] run the hazard of taking him by the beard in the next place we meet him." Thus Franklin manufactured the first recorded abortion debate in America, not because he had any strong feelings on the issue, but because he knew it would sell newspapers. [This is an excerpt from the recent biography Benjamin Franklin: An American Life by Walter Isaacson]
Franklin's many actions of the time certainly weren't running at cross purposes, and he did manage to align his interests in parallel. He truly was a master, and we'll be hearing more about him on this blog soon.

This isn't the first time I've written about this subject, either. In a previous post, On the Overloading of Information, I noted one of the main reasons why blogging continues to be an enjoyable activity for me, despite changing interests and desires:
I am often overwhelmed by a desire to consume various things - books, movies, music, etc... The subject of such things is also varied and, as such, often don't mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time).
One way you can tell that my interests have shifted over the years is that the format and content of my writing here has also changed. I am once again reminded of Neal Stephenson's original minimalist homepage in which he speaks of his ongoing struggle against what Linda Stone termed "continuous partial attention," as that curious feature of modern life only makes the necessity of aligning interests in parallel that much more important.

Aligning blogging with my other core interests, such as reading fiction, is one of the reasons I frequently quote fiction, even in reference to a serious topic. Yes, such a practice is frowned upon, but blogging is a hobby, the idea of which is to have fun. Indeed, Glenn Reynolds, progenitor of one of the most popular blogging sites around, also claims to blog for fun, and interestingly enough, he has quoted fiction in support of his own serious interests as well (more than once). One other interesting observation is that all references to fiction in this post, including even Reynolds' references, are from Neal Stephenson's novels. I'll leave it as an exercise for the reader to figure out what significance, if any, that holds.
Posted by Mark on November 11, 2004 at 11:45 PM .: link :.


End of This Day's Posts

Sunday, November 07, 2004

Open Source Security
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.

Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.

Now I'd like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they'll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.

However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerrillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond's essay The Cathedral and the Bazaar as a starting point):
The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?
Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.

I'm a little unclear as to what the purpose of the bazaar is - the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I'm unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal - perhaps that is not a wise assumption).

In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.

As one of the commenters on his site notes, it is tempting to claim that John Robb's analysis is essentially an instruction manual for a guerrilla organization, but that misses the point. It's better to know where we are vulnerable before we discover that some weakness is being exploited.

One thing that Robb is a little short on is actual, concrete ways to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.
Posted by Mark on November 07, 2004 at 08:56 PM .: link :.


End of This Day's Posts

Sunday, October 10, 2004

Open Security and Full Disclosure
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.

Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing computer and network or software vulnerability information a good idea, or does it just help attackers?

When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until a patch is released and installed. There are five key phases which define the size of the window:
Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads -- either slowly, quickly, or not at all -- depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.

Phase 3 is after the vulnerability is announced. Maybe the announcement is made by the person who discovered the vulnerability in Phase 2, or maybe it is made by someone else who independently discovered the vulnerability later. At that point more people learn about the vulnerability, and the risk increases. In Phase 4, an automatic attack tool to exploit the vulnerability is published. Now the number of people who can exploit the vulnerability grows exponentially. Finally, the vendor issues a patch that closes the vulnerability, starting Phase 5. As people install the patch and re-secure their systems, the risk of exploit shrinks. Some people never install the patch, so there is always some risk. But it decays over time as systems are naturally upgraded.
The goal is to minimize the impact of the vulnerability by reducing the window of exposure (the area under the curve in figure 1). There are two basic approaches: secrecy and full disclosure.
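Out of curiosity, here's a crude numerical version of that picture: assign each phase a risk level and a duration, and the window of exposure is just the sum of risk times time. Every number below is an invented placeholder, not anything taken from Schneier's essay:

# (phase, relative risk level, days in that phase) - all invented.
phases = [
    ("undiscovered",          0.0, 90),
    ("discovered, secret",    0.2, 30),
    ("publicly announced",    0.5, 20),
    ("attack tool published", 1.0, 10),
    ("patch available",       0.3, 60),  # decays as systems upgrade
]

window = sum(risk * days for _, risk, days in phases)
print(f"total exposure (risk-days): {window:.0f}")

Full disclosure gambles that shortening the quiet-but-vulnerable phases (by forcing a faster patch) shrinks the total area, even though it raises the risk level in the short term.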

The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn't work well:
The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is just plain bad design.

... Secrecy prevents people from assessing their own risks.
Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:
Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn't bother fixing them, believing in the security of secrecy.
Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn't perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.

This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to its flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counter-intuitive, and there is a tendency to not want to increase risk in the short term (as full disclosure does). Unfortunately, that means that the long term danger increases, as there is less incentive to fix security problems.

By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it's something that I'm bad at, unless news breaks on a Sunday).
Posted by Mark on October 10, 2004 at 11:56 AM .: link :.


End of This Day's Posts

Sunday, October 03, 2004

Monkey Research Squad Strikes Again
My crack squad of monkey researchers comes through again with a few interesting links:
  • Blogs, Media, and Business Strategy: David Foster draws parallels between business strategy, media bias, and blogs:
    The authors argue that disruptive innovations--those destined to change the structure of an industry--tend to attack from below. They usually first appear in a form that is in some ways inferior to the existing dominant technologies, and hence are unlikely to get the attention or respect of industry incumbents. They provide examples in industries ranging from steel to semiconductors. In steel, for instance, the challenger technology was "mini-mills" using electric arc furnaces to melt scrap. At first, the steel produced in these mills wasn't as good as the steel produced with the incumbent technology, the gigantic integrated steel plants, so they focused on an unglamorous and relatively low-margin market: reinforcing bar (rebar). Big-steel executives could afford to disregard the mini-mills and to focus on higher-end business.

    I would bet that the comments made by some big-steel execs about their mini-mill counterparts were quite similar in tone to the comment recently made by a CBS exec about bloggers in their pajamas.
  • Andy Cline and Jay Manifold announce a new joint venture called 411blog.net, a resource whereby "a symbiotic relationship between blogging and traditional forms of journalism can be deliberately cultivated."
  • Belmont Club has some excellent information regarding how the process of mapping social networks and understanding their properties can be used to take down terrorist networks.
  • Kevin Murphy notes the surprising similarities between musicals and action movies.
Posted by Mark on October 03, 2004 at 02:44 PM .: link :.


End of This Day's Posts

Wednesday, September 15, 2004

A Reflexive Media
"To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!" - Anne Murrow Lindbergh
There are many types of documentary films. The most common form of documentary is referred to as Direct Address (aka Voice of God). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you're expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns' famous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category (a small caveat: most films are hybrids, rarely falling exclusively into one category). Such films give the illusion of being an invisible witness to certain events and are thus very persuasive and powerful.

The problem with Direct Address documentaries is that they grew out of a belief that Truth is knowable through objective facts. In a recent sermon he posted on the web, Donald Sensing spoke of the difference between facts and the Truth:
Truth and fact are not the same thing. We need only observe the presidential race to discern that. John Kerry and allies say that the results of America's war against Iraq are mostly a failure while George Bush and allies say they are mostly a success. Both sides have the same facts, but both arrive at a different "truth."

People rarely fight over facts. What they argue about is what the facts mean, what is the Truth the facts indicate.
I'm not sure Sensing chose the best example here, but the concept itself is sound. Any documentary is biased in the Truth that it presents, even if the facts are undisputed. In a sense objectivity is impossible, which is why documentary scholar Bill Nichols admires films which seek to contextualize themselves, exposing their limitations and biases to the audience.

Reflexive Documentaries use many devices to acknowledge the filmmaker's presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.

An excellent example of a Reflexive documentary is Errol Morris' brilliant film, The Thin Blue Line. The film examines the "truth" around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing - deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the "truth" he is presenting.

Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity. There are many other forms of documentary not covered here (i.e. direct cinema/cinema verité, interview-based, performative, mock-documentaries, etc... most of which mesh together as they did in Morris' Blue Line to form a hybrid).

In Bill Nichols' seminal essay, Voice of Documentary (Can't seem to find a version online), he says:
"Documentary filmmakers have a responsibility not to be objective. Objectivity is a concept borrowed from the natural sciences and from journalism, with little place in the social sciences or documentary film."
I always found it funny that Nichols equates the natural sciences with journalism, as it seems to me that modern journalism is much more like a documentary than a natural science. As such, I think the lessons of Reflexive documentaries (and its counterparts) should apply to the realm of journalism.

The media emphatically do not acknowledge their biases. By bias, I don't mean anything as short-sighted as liberal or conservative media bias, I mean structural bias, of which political orientation is but a small part (that link contains an excellent essay on the nature of media bias, one that I find presents a more complete picture and is much more useful than the tired old ideological bias we always hear so much about*). Such subjectivity does exist in journalism, yet the media stubbornly persist in their firm belief that they are presenting the objective truth.

The recent CBS scandal, consisting of a story bolstered by what appear to be obviously forged documents, provides us with an immediate example. Terry Teachout makes this observation regarding how few prominent people are willing to admit that they are wrong:
I was thinking today about how so few public figures are willing to admit (for attribution, anyway) that they’ve done something wrong, no matter how minor. But I wasn’t thinking of politicians, or even of Dan Rather. A half-remembered quote had flashed unexpectedly through my mind, and thirty seconds’ worth of Web surfing produced this paragraph from an editorial in a magazine called World War II:
Soon after he had completed his epic 140-mile march with his staff from Wuntho, Burma, to safety in India, an unhappy Lieutenant General Joseph W. Stilwell was asked by a reporter to explain the performance of Allied armies in Burma and give his impressions of the recently concluded campaign. Never one to mince words, the peppery general responded: "I claim we took a hell of a beating. We got run out of Burma and it is as humiliating as hell. I think we ought to find out what caused it, and go back and retake it."
Stilwell spoke those words sixty-two years ago. When was the last time that such candor was heard in like circumstances? What would happen today if similar words were spoken by some equally well-known person who’d stepped in it up to his eyebrows?
As he points out later in his post, I don't think we're going to be seeing such admissions any time soon. Again, CBS provides a good example. Rather than admit the possibility that they may be wrong, their response to the criticisms of their sources has been vague, dismissive, and entirely reliant on their reputation as a trustworthy staple of journalism. They have not yet comprehensively responded to any of the numerous questions about the documents; questions which range from "conflicting military terminology to different word-processing techniques". It appears their strategy is to escape the kill zone by focusing on the "truth" of their story, that Bush's service in the Air National Guard was less than satisfactory. They won't admit that the documents are forgeries, and by focusing on the arguably important story, they seek to distract from any discussion of their own wrongdoing - in effect claiming that the documents aren't important because the story is "true" anyway.

Should they admit they were wrong? Of course they should, but they probably won't. If they don't, it won't be because they think the story is right or because they think the documents are genuine. They won't admit wrongdoing and they won't correct their methodologies or policies because to do so would be to acknowledge to the public that they are less than just an objective purveyor of truth.

Yet I would argue that they should do so, that it is their duty to do so just as it is the documentarian's responsibility to acknowledge their limitations and agenda to their audience.

It is also interesting to note that weblogs contrast the media by doing just that. Glenn Reynolds notes that the internet is a low-trust medium, which paradoxically indicates that it is more trustworthy than the media (because blogs and the like acknowledge their bias and agenda, admit when they're wrong, and correct their mistakes):
The Internet, on the other hand, is a low-trust environment. Ironically, that probably makes it more trustworthy.

That's because, while arguments from authority are hard on the Internet, substantiating arguments is easy, thanks to the miracle of hyperlinks. And, where things aren't linkable, you can post actual images. You can spell out your thinking, and you can back it up with lots of facts, which people then (thanks to Google, et al.) find it easy to check. And the links mean that you can do that without cluttering up your narrative too much, usually, something that's impossible on TV and nearly so in a newspaper.

(This is actually a lot like the world lawyers live in -- nobody trusts us enough to take our word for, well, much of anything, so we back things up with lots of footnotes, citations, and exhibits. Legal citation systems are even like a primitive form of hypertext, really, one that's been around for six or eight hundred years. But I digress -- except that this perhaps explains why so many lawyers take naturally to blogging).

You can also refine your arguments, updating -- and even abandoning them -- in realtime as new facts or arguments appear. It's part of the deal.

This also means admitting when you're wrong. And that's another difference. When you're a blogger, you present ideas and arguments, and see how they do. You have a reputation, and it matters, but the reputation is for playing it straight with the facts you present, not necessarily the conclusions you reach.
The mainstream media as we know it is on the decline. They will no longer be able to get by on their brand or their reputations alone. The collective intelligence of the internet, combined with the natural reflexiveness of its environment, has already provided a challenge to the underpinnings of journalism. On the internet, the dominance of the media is constantly challenged by individuals who question the "truth" presented to them in the media. I do not think that blogs have the power to eclipse the media, but their influence is unmistakable. The only question that remains is if the media will rise to the challenge. If the way CBS has reacted is any indication, then, sadly, we still have a long way to go.

* Yes, I do realize the irony of posting this just after I posted about liberal and conservative tendencies in online debating, and I hinted at that with my "Update" in that post.

Thanks to Jay Manifold for the excellent Structural Bias of Journalism link.
Posted by Mark on September 15, 2004 at 11:07 PM .: link :.


End of This Day's Posts

Thursday, September 09, 2004

Benjamin Franklin: American, Blogger & LIAR!
I've been reading a biography of Benjamin Franklin (Benjamin Franklin: An American Life by Walter Isaacson), and several things have struck me about the way in which he conducted himself. As with a lot of historical figures, there is a certain aura that surrounds the man which is seen as impenetrable today, but it's interesting to read about how he was perceived in his time and contrast that with how he would be perceived today. As usual, there is a certain limit to the usefulness of such speculation, as it necessarily must be based on certain assumptions that may or may not be true (as such this post might end up saying more about me and my assumptions than Franklin!). In any case, I find such exercises interesting, so I'd like to make a few observations.

The first is that he would have probably made a spectacular blogger, if he chose to engage in such an activity (Ken thinks he would definitely be a blogger, but I'm not so sure). Not only did he have all the makings of a wonderful blogger, I think he'd have been extremely creative with the format. He was something of a populist; his writing was humorous, self-deprecating, and often quite profound at the same time. His range of knowledge and interest was wide, and his tone was often quite congenial. All qualities valued in any blogger.

He was incredibly prolific (another necessity for a successful blog), and often wrote the letters to his paper himself under assumed names, and structured them in such a way as to gently deride his competitors while making some other interesting point. For instance, Franklin once published two letters, written under two different pseudonyms, in which he manufactured the first recorded abortion debate in America - not because of any strong feelings on the issue, but because he knew it would sell newspapers and because his competitor was serializing entries from an encyclopedia at the time and had started with "Abortion." Thus the two letters were not only interesting in themselves, but also provided ample opportunity to impugn his competitor.

One thing I think we'd see in a Franklin blog is entire comment threads consisting of a full back-and-forth debate, with all entries written by Franklin himself under assumed names. I can imagine him working around other "real" commenters with his own pseudonyms, and otherwise having fun with the format (he'd almost certainly make a spectacular troll as well).

If there was ever a man who could make a living out of blogging, I think Franklin was it. This is, in part, why I'm not sure he'd truly end up as a pure blogger, as even in his day, Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. He could certainly have organized something akin to The Junto on the internet, where a group of likeminded fellows got together (whether it be physically or virtually over the internet) and discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers.

Then again, perhaps Franklin would simply have started his own newspaper and had nothing to do with blogging (or perhaps he would attempt to mix the two in some new way). The only problem would be that the types of satire and hoaxes he could get away with in his newspapers in the early 18th century would not really be possible in today's atmosphere (such playfulness has long ago left the medium, but is alive and well in the blogosphere, which is one thing that would tend to favor his participation).

Which brings me to my next point: I have to wonder how Franklin would have done in today's political climate. Would he have been able to achieve political prominence? Would he want to? Would the anonymous letters and hoaxes in his newspapers have gotten him into trouble? I can imagine the self-righteous indignation now: "His newspaper is a farce! He's a LIAR!" And the Junto? I don't even want to think of the conspiracy theories that could be conjured with that sort of thing in mind.

One thing Franklin was exceptionally good at was managing his personal image, but would he be able to do so in today's atmosphere? I suspect he would have done well in our time, but I don't know how politically active he would be (and I suppose there is something to be said about his participation being partly influenced by the fact that he was a part of a revolution, not a true politician of the kind we have today). I know the basic story of his life, but I haven't gotten that far in the book, so perhaps I should revisit this subject later. And thus ends my probably inaccurate, but interesting nonetheless, discussion of Franklin in our times. Expect more references to Franklin in the future, as I have been struck by quite a few things about his life that are worth discussing today.
Posted by Mark on September 09, 2004 at 10:00 PM .: link :.


End of This Day's Posts

Sunday, August 22, 2004

Functional Chauvinism
Respecting Other Talents: David Foster writes about the dangers of "Functional Chauvinism":
A person typically spends his early career in a particular function, and interacts mainly with others in that function. And there is often an unwholesome kind of functional "patriotism" which goes beyond pride in one's own work and disparages the work done by others ("we could get this software written if those marketing idiots would just stop bothering us.")
An excellent post (and typical of the work over at Photon Courier), Foster focuses on the impacts to the business world, but I remember this sort of thing being prevalent in college. I was an engineer, but I naturally had to take on a number of humanities courses in addition to my technical workload. Functional chauvinism came from both students and professors, but the people who stood out were those who avoided this pitfall and made an effort to understand and respect functional differences.

For instance, many of my fellow engineering students were pissed that they even had to take said humanities courses. After all, they were paying an exorbitant amount of money to be educated in advanced technical issues, not what some Greek guy thought 2 millennia ago (personally, I found those subjects interesting and appreciated the chance for an easy A - ok, there's a bit of chauvinism there too, but I at least respected the general idea that humanities were important).

On the other hand, there were professors who were so absorbed in their area of study that they could not conceive of a student not being spellbound and utterly fascinated by whatever they taught. For someone majoring in philosophy, that's fine, but for an engineer who considers their technical courses to be their priority, it becomes a little different. I got lucky, in that several of my professors actually took into account what their students were majoring in. Often a class would be filled with engineers or hard science majors, and these classes were made more relevant and rewarding because the professors took that into account when teaching. Other professors were not so considerate.

It is certainly understandable to have such feelings, and to a point there's no real harm done, but it can't hurt to take a closer look at what other people do either. As Foster concludes, "Respect for talents other than one's own. A key element of individual and organizational success." Indeed.
Posted by Mark on August 22, 2004 at 03:37 PM .: link :.


Miscellany
Checking in with my chain smoking monkey research staff, here are a few interesting links they've dug up:
  • Baghdad Journal, Part 13: You knew this was coming - yet another in the long series of articles by artist Steve Mumford about Iraq on the ground. This one focuses a little on the law enforcement situation, including the training of Iraqi police and National Guard, and prisons. Interesting stuff, as always. If you are not familiar with Mumford's work, I suggest you take a look at all of his previous columns. Highly recommended.
  • Alien vs. Predator: Something about this list where Aliens and Predators compete in unlikely events like breakdancing (which logically goes to Alien) and Macramé (which is totally a Predator dominated event) just feels right. Perhaps it's because of the Olympics.
  • Cinematic Supervillain Showdown: Along the same lines, inspired by AvP, Matthew Baldwin made up a list pitting other cinematic villains against one another, March Madness style. Funniest matchups: Sauron vs. Ferris Bueller's Principal and Hannibal Lecter vs. Stay-Puft Marshmallow Man. Kottke has a few interesting matchup ideas as well. As with any such list, there are notable absences, and I won't stoop to the level of feigning shocked disappointment that one of my favorites isn't included... Alright, I lied, where the hell is Boba Fett?!
  • Grand List of Overused Science Fiction Clichés: This list of overused plot-points has a two-pronged effect: it makes it that much more difficult to write a story, and it makes you genuinely appreciate it when a story such as, perhaps, this one, whose concept is certainly overused, studiously avoids any clichés. [via The Ministry of Minor Perfidy]
  • 2004 Olympics: Speaking of those perfidious folks, Johno has a great post about the Olympics:
    Olympic badminton is scary. That wussy little shuttlecock and flimsy little racquet in the hands of experts become weapons of fearsome power. Last night in a doubles match I watched a short little American guy with a 35-inch (!) vertical leap whip off a kill that must have been going 85 MPH when the shuttlecock hit the court. Unbelievable. More unbelievable is that they got taken apart by a Norwegian team who played like implacable machines.
    Very true. And yes, those lady gymnasts do "need to eat some cake." I'd also like to mention how astounding their routines are. I never really paid much attention to it before, but I now realize that I'm not sure I can even walk across the balance beam and that these people are truly amazing individuals.
That's all for now. More later.
Posted by Mark on August 22, 2004 at 02:32 PM .: link :.


End of This Day's Posts

Sunday, July 18, 2004

With great freedom, comes great responsibility...
David Foster recently wrote about a letter to the New York Times which echoed sentiments regarding Iraq that appear to be commonplace in certain circles:
While we have removed a murderous dictator, we have left the Iraqi people with a whole new set of problems they never had to face before...
I've often written about the tradeoffs inherent in solving problems, and the invasion of Iraq is no exception. Let us pretend for a moment that everything that happened in Iraq over the last year went exactly as planned. Even in that best case scenario, the Iraqis would be facing "a whole new set of problems they never had to face before." There was no action that could have been taken regarding Iraq (and this includes inaction) that would have resulted in an ideal situation. We weren't really seeking to solve the problems of Iraq, so much as we were exchanging one set of problems for another.

Yes, the Iraqis are facing new problems they have never had to face before, but the point is that the new problems are more favorable than the old problems. The biggest problem they are facing is, in short, freedom. Freedom is an odd thing, and right now, halfway across the world, the Iraqis are finding that out for themselves. Freedom brings great benefits, but also great responsibility. Freedom allows you to express yourself without fear of retribution, but it also allows those you hate to express things that make your blood boil. Freedom means you have to acknowledge their views, no matter how repulsive or disgusting you may find them (there are limits, of course, but that is another subject). That isn't easy.

A little while ago, Steven Den Beste wrote about Jewish immigrants from the Soviet Union:
About 1980 (I don't remember exactly) there was a period in which the USSR permitted huge numbers of Jews to leave and move to Israel. A lot of them got off the jet in Tel Aviv and instantly boarded another one bound for New York, and ended up here.

For most of them, our society was quite a shock. They were free; they were out of the cage. But with freedom came responsibility. The State didn't tell them what to do, but the State also didn't look out for them.

The State didn't prevent them from doing what they wanted, but the State also didn't prevent them from screwing up royally. One of the freedoms they discovered they had was the freedom to starve.
There are a lot of people who ended up in the U.S. because they were fleeing oppression, and when they got here, they were confronted with "a whole new set of problems they never had to face before." Most of them were able to adapt to the challenges of freedom and prosper, but don't confuse prosperity with utopia. These people did not solve their problems, they traded them for a set of new problems. For most of them, the problems associated with freedom were more favorable than the problems they were trying to escape from. For some, the adjustment just wasn't possible, and they returned to their homes.

Defecting North Koreans face a host of challenges upon their arrival in South Korea (if they can make it that far), including the standard freedom related problems: "In North Korea, the state allocates everything from food to jobs. Here, having to do their own shopping, banking or even eating at a food court can be a trying experience." The differences between North Korea and South Korea are so vast that many defectors cannot adapt, despite generous financial aid, job training and other assistance from civic and religious groups. Only about half of the defectors are able to wrangle jobs, but even then, it's hard to say that they've prospered. But at the same time, are their difficulties now worse than their previous difficulties? Moon Hee, a defector who is having difficulties adjusting, comments: "The present, while difficult, is still better than the past when I did not even know if there would be food for my next meal."

There is something almost paradoxical about freedom. You see, it isn't free. Yes, freedom brings benefits, but you must pay the price. If you want to live in a free country, you have to put up with everyone else being free too, and that's harder than it sounds. In a sense, we aren't really free, because the freedom we live with and aspire to is a limiting force.

On the subject of Heaven, Saint Augustine once wrote:
The souls in bliss will still possess the freedom of will, though sin will have no power to tempt them. They will be more free than ever–so free, in fact, from all delight in sinning as to find, in not sinning, an unfailing source of joy. ...in eternity, freedom is that more potent freedom which makes all sin impossible. - Saint Augustine, City of God (Book XXII, Chapter 30)
Augustine's concept of a totally free will is seemingly contradictory. For him, freedom, True Freedom, is doing the right thing all the time (I'm vastly simplifying here, but you get the point). Outside of Heaven, however, doing the right thing, as we all know, isn't easy. Just ask Spider-Man.

I never really read the comics, but in the movies (which appear to be true to their source material) Spider-Man is all about the conflict between responsibilities and desires. Matthew Yglesias is actually upset with the second film because it has a happy ending:
Being the good guy -- doing the right thing -- really sucks, because doing the right thing doesn't just mean avoiding wrongdoing, it means taking affirmative action to prevent it. There's no time left for Peter's life, and his life is miserable. Virtue is not its own reward, it's virtue, the rewards go to the less conscientious. There's no implication that it's all worthwhile because God will make it right in the End Times, the life of the good guy is a bleak one. It's an interesting (and, I think, a correct) view and it's certainly one that deserves a skilled dramatization, which is what the film gives you right up until the very end. But then -- ta da! -- it turns out that everyone does get to be happy after all. A huge letdown.
Of course, plenty of people have noted that the Spider-Man story doesn't end with the second movie, and that the third is bound to be filled with the complications of superhero dating (which are not limited to Spider-Man).

Spider-Man grapples with who he is. He has gained all sorts of powers, and with those powers, he has also gained a certain freedom. It could be very liberating, but as the saying goes: With great power comes great responsibility. He is not obligated to use his powers for good or at all, but he does. However, for a good portion of the second film he shirks his duties because a life of pure duty has totally ruined his personal life. This is that conflict between responsibilities and desires I mentioned earlier. It turns out that there are limits to Spider-Man's altruism.

For Spider-Man, it is all about tradeoffs, though he may have learned it the hard way. First he took on too much responsibility, and then too little. Will he ever strike a delicate balance? Will we? For we are all, in a manner of speaking, Spider-Man. We all grapple with similar conflicts, though they manifest in our lives with somewhat less drama. Balancing your personal life with your professional life isn't as exciting, but it can be quite challenging for some.

And so the people of Iraq are facing new challenges; problems they have never had to face before. Like Spider-Man, they're going to have to deal with their newfound responsibilities and find a way to balance them with their desires. Freedom isn't easy, and if they really want it, they'll need to do more than just avoid problems, they'll have to actively solve them. Or, rather, trade one set of problems for another. Because with great freedom, comes great responsibility.
Posted by Mark on July 18, 2004 at 09:16 PM .: link :.


End of This Day's Posts

Sunday, June 13, 2004

A Specific Culture
In thinking of the issues discussed in my last post, I remembered this Neal Stephenson quote from In the Beginning Was the Command Line:
The only real problem is that anyone who has no culture, other than this global monoculture, is completely screwed. Anyone who grows up watching TV, never sees any religion or philosophy, is raised in an atmosphere of moral relativism, learns about civics from watching bimbo eruptions on network TV news, and attends a university where postmodernists vie to outdo each other in demolishing traditional notions of truth and quality, is going to come out into the world as one pretty feckless human being. And--again--perhaps the goal of all this is to make us feckless so we won't nuke each other.

On the other hand, if you are raised within some specific culture, you end up with a basic set of tools that you can use to think about and understand the world. You might use those tools to reject the culture you were raised in, but at least you've got some tools.
[emphasis added] It is true that one of the things that religion gives us is a specific way of looking at and understanding the world. Further, it gives people a certain sense of belonging that is so important to us as social beings. Even if someone ends up rejecting the tenets of their faith, they have benefitted from the sense of community and gained a certain way of looking at the world that won't entirely go away.
Posted by Mark on June 13, 2004 at 09:32 PM .: link :.


End of This Day's Posts

Friday, June 11, 2004

Religion isn't as comforting as it seems
Steven Den Beste is an atheist, yet he is unlike any atheist I have ever met in that he seems to understand theists (in the general sense of the term) and doesn't hold their beliefs against them. As such, I have gained an immense amount of respect for him and his beliefs. He speaks with conviction about his beliefs, but he is not evangelistic.

In his latest post, he asks one of the great unanswerable questions: What am I? I won't pretend to have any of the answers, but I do object to one thing he said. It is a belief that is common among atheists (though theists are little better):
Is a virus alive? I don't know. Is a hive mind intelligent? I don't know. Is there actually an identifiable self with continuity of existence which is typing these words? I really don't know. How much would that self have to change before we decide that the continuity has been disrupted? I think I don't want to find out.

Most of those kinds of questions either become moot or are easily answered within the context of standard religions. Those questions are uniquely troubling only for those of us who believe that life and intelligence are emergent properties of certain blobs of mass which are built in certain ways and which operate in certain kinds of environments. We might be forced to accept that identity is just as mythical as the soul. We might be deluding ourselves into thinking that identity is real because we want it to be true.
[Emphasis added] The idea that these kinds of unanswerable questions are not troubling to a believer, or are easily answered, is a common one, but I believe it to be false. Religion is no more comforting than any other system of beliefs, including atheism. Religion does provide a vocabulary for the unanswerable, but all that does is help us grapple with the questions - it doesn't solve anything and I don't think it is any more comforting. I believe in God, but if you asked me what God really is, I wouldn't be able to give you a definitive answer. Actually, I might be able to do that, but "God is a mystery" is hardly comforting or all that useful.

Elsewhere in the essay, he refers to the Christian belief in the soul:
To a Christian, life and self are ultimately embodied in a person's soul. Death is when the soul separates from the body, and that which makes up the essence of a person is embodied in the soul (as it were).
He goes on to list some conundrums that would be troubling to the believer but they all touch on the most troubling thing - what the heck is the soul in the first place? Trying to answer that is no more comforting to a theist than trying to answer the questions he's asking himself. The only real difference is a matter of vocabulary. All religion has done is shifted the focus of the question.

Den Beste goes on to say that there are many ways in which atheism is cold and unreassuring, but fails to recognize the ways in which religion is cold and unreassuring. For instance, there is no satisfactory theodicy that I have ever seen, and I've spent a lot of time studying such things (16 years of Catholic schooling, baby!). A theodicy is essentially an attempt to reconcile God's existence with the existence of evil. Why does God allow evil to exist? Again, there is no satisfactory answer to that question, not least because there is no satisfactory definition of either God or evil!

Now, theists often view atheists in a similar manner. While Den Beste laments the cold and unreassuring aspects of atheism, a believer almost sees the reverse. To some believers, if you remove God from the picture, you also remove all concept of morality and responsibility. Yet, that is not the case, and Den Beste provides an excellent example of a morally responsible atheist. The grass is greener on the other side, as they say.

All of this is generally speaking, of course. Not all religions are the same, and some are more restrictive and closed-minded than others. I suppose it can be a matter of degrees, with one religion or individual being more open minded than the other, but I don't really know of any objective way to measure that sort of thing. I know that there are some believers who aren't troubled by such questions and proclaim their beliefs in blind faith, but I don't count myself among them, nor do I think it is something that is inherent in religion (perhaps it is inherent in some religions, but even then, religion does not exist in a vacuum and must be reconciled with the rest of the world).

Part of my trouble with this may be that I seem to have the ability to switch mental models rather easily, viewing a problem from a number of different perspectives and attempting to figure out the best way to approach a problem. I seem to be able to reconcile my various perspectives with each other as well (for example, I seem to have no problem reconciling science and religion with each other), though the boundaries are blurry and I can sometimes come up with contradictory conclusions. This is in itself somewhat troubling, but at the same time, it is also somewhat of an advantage that I can approach a problem in a number of different ways. The trick is knowing which approach to use for which problem; hardly an easy proposition. Furthermore, I gather that I am somewhat odd in this ability, at least among believers. I used to debate religion a lot on the internet, and after a time, many refused to think of me as a Catholic because I didn't seem to align with others' perception of what Catholics are. I always found that rather amusing, though I guess I can understand the sentiment.

Unlike Den Beste, I do harbor some doubt in my beliefs, mainly because I recognize them as beliefs. They are not facts, and I must concede the possibility that my beliefs are incorrect. Like all sets of beliefs, there is an aspect of my beliefs that is very troubling and uncomforting, and there is a price we all pay for believing what we believe. And yet, believe we must. If we required our beliefs to be facts in order to act, we would do nothing. The value we receive from our beliefs outweighs the price we pay, or so we hope...

I suppose this could be seen by Steven to be missing the forest for the trees, but the reason I posted it is because the issue of beliefs discussed above fits nicely with several recent posts I made under the guise of Superstition and Security Beliefs (and Heuristics). They might provide a little more detail on the way I think regarding these subjects.
Posted by Mark on June 11, 2004 at 12:09 AM .: link :.


End of This Day's Posts

Sunday, May 30, 2004

Heuristics of Combat
Otherwise known as Murphy's laws of Combat, most of which are derived from Murphy's more general law: "Anything that can go wrong, will go wrong." Soldiers often add to this what is called O'Neil's Law: "Murphy was an optimist."

War is, of course, a highly unstable and chaotic undertaking. Combat and preparation are beset on all sides by unanticipated problems, especially during the opening stages of combat, when all of the theoretical constructs, plans, and doctrines are put to the test. Infantrymen are common victims of Murphy's Law, and have thus codified their general observations in a list of Murphy's laws of Combat. Naturally, there are many variations of the list, but I'll only be referencing a few rules because I think they're a rather telling example of heuristics in use.

Most of the rules are concise and, if it weren't for the subject matter, somewhat humorous bits of wisdom such as "Incoming fire has the right of way," and though some are indeed factual, most are based on general observations or are meant to imply a heuristic. For instance:
Always keep in mind that your weapon was made by the lowest bidder.
This is, of course, a fact: most of the time, weapons are made by the lowest bidder. And yet, there is an unmistakable conclusion that one is supposed to reach when reading this rule: your weapon won't always work the way it is supposed to. That is also true, but it is worth noting that one must still rely on their weapon. If a soldier refused to fight unless he had a perfect weapon, he would never fight! This is an example of a heuristic which one must be aware of, but which one must use with caution. Weapons must be used, after all.
Perfect plans aren't

No plan survives the first few seconds of combat.

A Purple Heart just proves that you were smart enough to think of a plan, stupid enough to try it, and lucky enough to survive.
These laws refer to the difficulty of planning an action amid the chaotic and unpredictable atmosphere of war. To go into battle without a plan is surely foolish, and yet, ironically, the plan rarely survives intact (interestingly, these laws, which indicate the failure of one heuristic, the necessity of planning, have become another: don't blindly follow the plan, especially when events don't conform to it). The ability to adapt and improvise is thus a treasured characteristic in a soldier.

I recently watched a few episodes of the excellent Band of Brothers series, and in one episode, a group of US soldiers assault a German artillery battery. Lieutenant Winters, the man planning the attack, instructs Sergeant Lipton that he'll need TNT the moment his group reaches the first gun (so they can blow it up).

Of course, it doesn't quite go as planned, and Lipton is held up crossing the battlefield. Winters improvises, using what he has available (another soldier had some TNT, but no way to detonate it, so they used a German grenade they found in the nest). Once Lipton finally reaches Winters with the TNT, Winters simply points to the busted gun, illustrating that the plan has not survived.
 
If it wasn't war, the futility of Lipton's actions would have been a comical moment. Instead it is somewhat infuriating. I don't think his TNT was ever actually used, even though all 4 guns were taken out during the battle...

A couple of times above, I've said that something might be funny, if it wasn't about war, which was a point I sort of made in my earlier post:
When you're me, rooting for a sports team or betting modest amounts of money on a race, failure doesn't mean much. In other situations, however, failure is not so benign. Yet, despite the repercussions, failure is still inevitable and necessary in these situations. In the case of war, for instance, this can be indeed difficult and heartbreaking, but no less necessary.
When planning a war, it is necessary to rely on heuristics because you may not have all the information you need, or the information you have might not be as accurate as you think. Unfortunately, there is no real way around this. Soldiers are forced to make decisions without all the facts, and must rely on imperfect techniques to do so. It is a simple fact of life, and we would do well to consider these sorts of things when viewing battles from afar. For while it may seem like a war that exhibits such chaos and unpredictability is a failure, such is not really the case. In closing, I'll leave you with yet another law of combat, one I find particularly fitting:
If it's stupid but works, it's not stupid.
Posted by Mark on May 30, 2004 at 06:30 PM .: link :.


Security Beliefs
Last week, I wrote about superstition, inspired by an Isaac Asimov article called "Knock Plastic!" In revisiting that essay, I find that Asimov has collected 6 broad examples of what he calls "Security Beliefs." They are called this because such beliefs are "so comforting and so productive of feelings of security" that all men employ them from time to time. Here they are:
  1. There exist supernatural forces that can be cajoled or forced into protecting mankind.
  2. There is no such thing, really, as death.
  3. There is some purpose to the Universe.
  4. Individuals have special powers that will enable them to get something for nothing.
  5. You are better than the next fellow.
  6. If anything goes wrong, it's not one's own fault.
I've been thinking a lot about these things, and the extent to which they manifest in my life. When asked to explain my actions (usually only to myself), I can usually come up with a pretty good reason for doing what I did. But did I really do it for that reason?

Last week, I also referenced this: "It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant." What process do we use to reject the alternatives and eventually select the winner? I'd like to think it was something logical and rational, but that strikes me as something of a security belief in itself (or perhaps just a demonstration of Asimov's 5th security belief).

When we refer to logic, we are usually referring to a definitive conclusion that can be inferred from the evidence at hand. Furthermore, this deductive process is highly objective and repeatable, meaning that multiple people working under the same rules with the same evidence should all get the same (correct) answer. Obviously, this is a very valuable process; mathematics, for instance, is based on deductive logic.

However, there are limits to this kind of logic, and there are many situations in which it does not apply. For example, we are rarely in possession of all the evidence necessary to come to a logical conclusion. In such cases, decisions are often required, and we must fall back on some other form of reasoning. This is usually referred to as induction, and it is generally based on some set of heuristics, or guidelines, which we have all been maintaining during the course of our lives. We produce this set of guidelines by extrapolating from our experiences, and by sharing our observations. Unlike deductive logic, it appears that this process is something that is innate, or at the very least, something that we are bred to do. It also appears that this process is very useful, as it allows us to operate in situations which we do not understand. We won't exactly know why we're acting the way we are, just that our past experience has shown that acting that way is good. It is almost a non-thinking process, and we all do it constantly.

The problem with this process is that it is inherently subjective and not always accurate. This process is extremely useful, but it doesn't invariably produce the desired results. Superstitions are actually heuristics, albeit generally false ones. But they arise because producing such explanations is a necessary part of our lives. We cannot explain everything we see, and since we often need to act on what we see, we must rely on less than perfect heuristics and processes.
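
As a crude illustration of what I mean, here's a sketch in Python (my own, not anything from Asimov): a heuristic behaves something like a running tally of what has worked before, consulted in place of a deduction we can't actually perform.

    from collections import defaultdict

    class Heuristic:
        """A rule of thumb built by induction: track how often each
        choice has worked out, and prefer the best record so far.
        No proof, no deduction, just extrapolation from experience."""

        def __init__(self):
            self.successes = defaultdict(int)
            self.trials = defaultdict(int)

        def record(self, choice, worked):
            self.trials[choice] += 1
            if worked:
                self.successes[choice] += 1

        def best_guess(self, choices):
            # Untried choices get a neutral score, so novelty isn't
            # ruled out entirely.
            def score(choice):
                if self.trials[choice] == 0:
                    return 0.5
                return self.successes[choice] / self.trials[choice]
            return max(choices, key=score)

The tally is cheap to consult and usually points the right way, but it is entirely a product of whichever experiences happened to come along; feed it a few coincidences and it will confidently recommend a superstition.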

Like it or not, most of what we do is guided by these imperfect processes. Strangely, these non-thinking processes work exceedingly well; so much so that we are rarely inclined to think that there is anything "wrong" with our behavior. I recently stumbled upon this, by Dave Rodgers:
Most of the time, people have little real idea why they do the things they do. They just do them. Mostly the reasons why have to do with emotions and feelings, and little to nothing to do with logic or reason. Those emotions and feelings are the products of complex interactions between certain hardwired behaviors and perceptual receivers; a set of beliefs that are cognitively accessible, but most often function below the level of consciousness in conjunction with the more genetically fixed apparatus mentioned before; and certain habits of behavior which are also usually unconscious. ...

If we're asked "why" we did something, most of the time we'll be able to craft what appears to be a perfectly rational explanation. That explanation will almost invariably involve making assertions that cast ourselves in the best light. That is to say, among the set of possible explanations, we will choose the ones that make us feel best about ourselves. Some people have physical or mental deficiencies that cause them to make the opposite choice, but similar errors occur in either case. The explanation will not rely on the best available evidence, but instead will rely on ambiguous or incomplete information that is difficult to thoroughly refute, or false information which is nevertheless contained within the accepted set of shared beliefs, and which allows us to feel as good or bad about ourselves as we feel is normal.
Dave seems to think that the processes I'm referring to are "emotional" and "feeling" based but I am not sure that is so. Extrapolating from a set of heuristics doesn't seem like an emotional process to me, but at this point we reach a rather pedantic discussion of what "emotion" really is.

The point here is that our actions aren't always perfectly reasonable or rational, and that is not necessarily a bad thing. If we could not act unless we could reach a logical conclusion, we would do very little. We do things because they work, not necessarily because we reasoned that they would work before we did them. Afterwards, we justify our actions, and store away any learned heuristics for future use (or modify existing ones to account for the new data). Most of the time, this process works. However, these heuristics will fail from time to time as well. When you're me, rooting for a sports team or betting modest amounts of money on a race, failure doesn't mean much. In other situations, however, failure is not so benign. Yet, despite the repercussions, failure is still inevitable and necessary in these situations. In the case of war, for instance, this can be indeed difficult and heartbreaking, but no less necessary. [thanks to Jonathon Delacour for the Dave Rodgers post]
Posted by Mark on May 30, 2004 at 05:18 PM .: link :.


End of This Day's Posts

Sunday, May 23, 2004

Superstition
One of my favorite anecdotes (probably apocryphal, as these things usually go) tells of a horseshoe that hung on the wall over Niels Bohr's desk. One day, an exasperated visitor could not help asking, "Professor Bohr, you are one of the world's greatest scientists. Surely you cannot believe that object will bring you good luck." "Of course not," Bohr replied, "but I understand it brings you luck whether you believe or not."

I've had two occasions with which to be obsessively superstitious this weekend. The first was Saturday night's depressing Flyers game. Due to a poorly planned family outing (thanks a lot Mike!), I missed the first period and a half of the game. During that time, the Flyers went down 2-0. As soon as I started watching, they scored a goal, much to my relief. But as the game ground to a less than satisfactory close, I could not help but think, what if I had been watching for that first period?

Even as I thought that, though, I recognized how absurd and arrogant a thought like that is. As a fan, I obviously cannot participate in the game, but all fans like to believe they are a factor in the outcome of the game and will thus go to extreme superstitious lengths to ensure the team wins. That way, there is some sort of personal pride to be gained (or lost, in my case) from the team winning, even though there really isn't.

I spent the day today at the Belmont Racetrack, betting on the ponies. Longtime readers know that I have a soft spot for gambling, but that I don't do it very often nor do I ever really play for high stakes. One of the things I really enjoy is people watching, because some people go to amusing lengths to perform superstitious acts that will bring them that mystical win.

One of my friends informed me of his superstitious strategy today. His entire betting strategy dealt with the name of the horse. If the horse's name began with an "S" (i.e. Secretariat, Seabiscuit, etc...) it was bound to be good. He also made an impromptu decision that names which displayed alliteration (i.e. Seattle Slew, Barton Bank, etc...) were also more likely to win. So today, when he spied "Seaside Salute" in the program, which exhibited both alliteration and the letter "S", he decided it was a shoo-in! Of course, he only bet it to win, and it placed, thus he got screwed out of a modest amount of money.

[Photo: John R. Velazquez, aboard Maddalena, rides to win the first race at Churchill Downs]

Like I should talk. My entire betting strategy revolves around John R. Velazquez, the best jockey in the history of horse racing. This superstition did not begin with me, as several friends discovered this guy a few years ago, but it has been passed on and I cannot help but believe in the power of JRV. When I bet on him, I tend to win. When I bet against him, he tends to be riding the horse that screws me over. As a result, I need to seriously consider the consequences of crossing JRV whenever I choose to bet on someone else.

Now, if I were to collect historical data regarding my bets for or against JRV (which is admittedly a very small data set, and thus not terribly conclusive either way, but stay with me here), I wouldn't be surprised to find that my beliefs are unwarranted. But that is the way of the superstition - no amount of logic or evidence is strong enough to be seriously considered (while any supporting evidence is, of course, trumpeted with glee).
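
In fact, a quick simulation shows how little a handful of bets proves (a sketch of my own in Python; the numbers are invented for illustration). Give a purely average, coin-flip jockey a small sample and he'll look charmed surprisingly often:

    import random

    def looks_charmed(bets=8, threshold=0.75, trials=100000):
        """Fraction of small samples in which a 50/50 jockey wins at
        least `threshold` of the time, enough to seem mystical."""
        charmed = 0
        for _ in range(trials):
            wins = sum(random.random() < 0.5 for _ in range(bets))
            if wins / bets >= threshold:
                charmed += 1
        return charmed / trials

    # Over 8 bets, a no-better-than-chance jockey wins 6 or more of
    # them about 14% of the time. Roughly one bettor in seven walks
    # away convinced of the jockey's powers.
    print(looks_charmed())

Which is to say: my JRV data set is exactly the kind that manufactures believers.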

Now, I don't believe for a second that watching the Flyers makes them play better, nor do I believe that betting on (or against) John R. Velazquez will increase (or decrease) my chances of winning. But I still think those things... after all, what could I lose?

This could be a manifestation of a few different things. It could be a relatively benign "security belief" (or "pleasing falsehood" as some like to call it - I'm sure there are tons of names for it) which, as long as you realize what you're dealing with, can actually be fun (as my obsession with JRV is). It could also be brought on by what Steven Den Beste calls the High cliff syndrome.
It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant. ... All of us have had the experience of thinking something which almost immediately horrified us, "Why would I think such a thing?" I call it "High cliff syndrome".

At a viewpoint in eastern Oregon on the Crooked River, looking over a low stone fence into a deep canyon with sheer walls, a little voice inside me whispered, "Jump!" AAAGH! I became nervous, and my palms started sweating, and I decided I was no longer having fun and got back into my car and continued on my way.
It seems to be one of the profound truths of human existence that we can conceive of impossible situations that we know will never be possible. None of us are immune, from one of the great scientific minds of our time to the lowliest casino hound. This essay was, in fact, inspired by an Isaac Asimov essay called "Knock Plastic!" (as published in Magic) in which Asimov confesses his habitual knocking of wood (of course, he became a little worried over the fact that natural wood was being used less and less in ordinary construction... until, of course, someone introduced him to the joys of knocking on plastic). The insights driven by such superstitious "security beliefs" must indeed be kept in perspective, but that includes realizing that we all think these things and that sometimes, it really can't hurt to indulge in a superstition.

Update: More on Security Beliefs here.
Posted by Mark on May 23, 2004 at 09:32 PM .: link :.


End of This Day's Posts

Thursday, May 20, 2004

Let's Go Flyers!
I don't write about hockey much, but since my Flyers decided to make tonight interesting with their overtime goal in a must-win game, I figured I was due. I've never really played hockey, so I can't say as though I have a true understanding of the game, but I can follow it well and even though NHL 2004 has eaten my soul, those EA Sports games have always helped me understand the real game better. Fortunately for me, Colby Cosh has been writing really solid stuff on his 2004 NHL Playoffs page. He actually hasn't posted there for a while (no round 3 notes, it seems), but what's there is still worth reading. Here he describes the epic overtime victory by the Flyers over the Maple Leafs, ending the second round of the playoffs:
I have to say that the Toronto Maple Leafs--in dying--made up for 13 games' worth of intermittently lackluster play in the seven minutes of overtime against Philadelphia Tuesday night. If I had to show a foreigner a short piece of hockey footage to help him understand the excitement this game can create, I'd show him that OT. It wasn't just the way things ended, although that right there is a story for the grandkids. Even before the all-century finish, the seven minutes were full of odd-man rushes, wildly bouncing pucks, great saves by Robert Esche followed by heart-stopping rebounds, and other terrific hits.

Then Darcy Tucker, our generation's Eddie Shack, flattened Sami Kapanen with the most devastating, gasp-inducing hit you will ever see in sudden-death overtime. It wasn't "Is there a doctor in the house?"--it was "Is there a priest?". Kapanen, only twenty feet or so from the Philly bench, staged an epic mini-drama--alas, seen only later in replays--as he struggled valiantly to leave the ice, falling three times and losing his grip on his stick. He didn't know his own name during those seconds, but he did the right thing. If he'd stayed down and tried to draw a charging penalty, play would have stopped. Instead, Kapanen was physically hauled over the boards by the off-ice Flyers, and Jeremy Roenick--himself playing with a shattered face and a largely fused spine--vaulted over him to set up a winning two-on-one. You don't get this kind of stuff in baseball.
Tonight's game had a similarly exciting feel to it, though perhaps not quite as spectacular, as there wasn't as much freewheeling back-and-forth play (but since most of the play had the Flyers in the offensive zone, it was damn exciting for me :P) With any luck, the Flyers will be able to harness that momentum for game 7 and then head for the Stanley Cup.

If the Flyers can pull this off, I think we'll be in for a spectacular Stanley Cup finals. Both Keith Primeau and Jarome Iginla have been obscenely dominant clutch players during the playoffs, and they're both really nice guys. It should make for a great series. But first things first. The Flyers need to win game 7 in Tampa on Saturday! Go Flyers!
Posted by Mark on May 20, 2004 at 11:21 PM .: link :.


End of This Day's Posts

Sunday, May 02, 2004

The Unglamorous March of Technology
We live in a truly wondrous world. The technological advances over just the past 100 years are astounding, but, in their own way, they're also absurd and even somewhat misleading, especially when you consider how these advances are discovered. More often than not, we stumble onto something profound by dumb luck or by brute force. When you look at how a major technological feat was accomplished, you'd be surprised by how unglamorous it really is. That doesn't make the discovery any less important or impressive, but we often take the results of such discoveries for granted.

For instance, how was Pi originally calculated? Chris Wenham provides a brief history:
So according to the Bible it's an even 3. The Egyptians thought it was 3.16 in 1650 B.C.. Ptolemy figured it was 3.1416 in 150 AD. And on the other side of the world, probably oblivious to Ptolemy's work, Zu Chongzhi calculated it to 355/113. In Bagdad, circa 800 AD, al-Khwarizmi agreed with Ptolemy; 3.1416 it was, until James Gregory begged to differ in the late 1600s.

Part of the reason why it was so hard to find the true value of Pi (π) was the lack of a good way to precisely measure a circle's circumference when your piece of twine would stretch and deform in the process of taking it. When Archimedes tried, he inscribed two polygons in a circle, one fitting inside and the other outside, so he could calculate the average of their boundaries (he calculated π to be 3.1418). Others found you didn't necessarily need to draw a circle: Georges Buffon found that if you drew a grid of parallel lines, each 1 unit apart, and dropped a pin on it that was also 1 unit in length, then the probability that the pin would fall across a line was 2/π. In 1901, someone dropped a pin 34080 times and got an average of 3.1415929.
π is an important number, and being able to figure out its value has been a significant factor in the advance of technology. While all of these numbers are pretty much the same (to varying degrees of precision), isn't it absurd that someone figured out π by dropping 34,000 pins on a grid? We take π for granted today; we don't have to go about finding the value of π, we just use it in our calculations.
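
Out of curiosity, the pin-dropping experiment is easy to recreate in software. Here's a minimal sketch in Python (my own illustration, not anything from Wenham's essay) simulating Buffon's setup, with parallel lines one unit apart and a pin one unit long:

    import math
    import random

    def buffon_pi_estimate(drops):
        """Estimate pi via Buffon's needle: with lines 1 unit apart and
        a 1-unit pin, the probability of crossing a line is 2/pi."""
        crossings = 0
        for _ in range(drops):
            # Distance from the pin's center to the nearest line (0 to
            # 1/2), and the pin's angle to the lines (0 to pi/2 suffices
            # by symmetry).
            center = random.uniform(0, 0.5)
            angle = random.uniform(0, math.pi / 2)
            # The pin crosses a line when its half-length projection
            # perpendicular to the lines reaches past the nearest line.
            if center <= 0.5 * math.sin(angle):
                crossings += 1
        # P(crossing) = 2/pi, so pi is roughly 2 * drops / crossings.
        return 2 * drops / crossings

    # The 1901 experiment used 34,080 drops.
    print(buffon_pi_estimate(34080))

Run it a few times and you'll see how brutish the method really is: with 34,080 drops the estimate typically wanders in the second decimal place, which makes the reported 3.1415929 look like either spectacular luck or some very generous counting.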

In Quicksilver, Neal Stephenson portrays several experiments performed by some of the greatest minds in history, and many of the things they did struck me as especially unglamorous. Most would point to the dog and bellows scene as a prime example of how unglamorous the unprecedented age of discovery recounted in the book really was (and they'd be right), but I'll choose something more mundane (page 141 in my edition):
"Help me measure out three hundred feet of thread," Hooke said, no longer amused.

They did it by pulling the thread off of a reel, and stretching it alongside a one-fathom-long rod, and counting off fifty fathoms. One end of the thread, Hooke tied to a heavy brass slug. He set the scale up on the platform that Daniel had improvised over the mouth of the well, and put the slug, along with its long bundle of thread, on the pan. He weighed the slug and thread carefully - a seemingly endless procedure disturbed over and over by light gusts of wind. To get a reliable measurement, they had to devote a couple of hours to setting up a canvas wind-screen. Then Hooke spent another half hour peering at the scale's needle through a magnifying lens while adding or subtracting bits of gold foil, no heavier than snowflakes. Every change caused the scale to teeter back and forth for several minutes before settling into a new position. Finally, Hooke called out a weight in pounds, ounces, grains, and fractions of grains, and Daniel noted it down. Then Hooke tied the free end of the thread to a little eye he had screwed on the bottom of the pan, and he and Daniel took turns lowering the weight into the well, letting it drop a few inches at a time - if it got to swinging, and scraped against the chalky sides of the hole, it would pick up a bit of extra weight, and ruin the experiment. When all three hundred feet had been let out, Hooke went for a stroll, because the weight was swinging a little bit, and its movements would disturb the scale. Finally, it settled down enough that he could go back to work with his magnifying glass and his tweezers.
And, of course, the experiment was a failure. Why? The scale was not precise enough! The book is filled with similar such experiments, some successful, some not.

Another example is telephones. Pick one up, enter a few numbers on the keypad and voila! you're talking to someone halfway across the world. Pretty neat, right? But how does that system work, behind the scenes? Take a look at the photo on the right. This is a typical intersection in a typical American city, and it is absolutely absurd. Look at all those wires! Intersections like that are all over the world, which is part of the reason I can pick up my phone and talk to someone so far away. One other part of the reason I can do that is that almost everyone has a phone. And yet, this system is perceived to be elegant.

Of course, the telephone system has grown over the years, and what we have now is elegant compared to what we used to have:
The engineers who collectively designed the beginnings of the modern phone system in the 1940's and 1950's only had mechanical technologies to work with. Vacuum tubes were too expensive and too unreliable to use in large numbers, so pretty much everything had to be done with physical switches. Their solution to the problem of "direct dial" with the old rotary phones was quite clever, actually, but by modern standards was also terribly crude; it was big, it was loud, it was expensive and used a lot of power and worst of all it didn't really scale well. (A crossbar is an N² solution.) ... The reason the phone system handles the modern load is that the modern telephone switch bears no resemblance whatever to those of 1950's. Except for things like hard disks, they contain no moving parts, because they're implemented entirely in digital electronics.
So we've managed to get rid of all the moving parts and make things run more smoothly and reliably, but isn't it still an absurd system? It is, but we don't really stop to think about it. Why? Because we've hidden the vast and complex backend of the phone system behind innocuous looking telephone numbers. All we need to know to use a telephone is how to operate it (i.e. how to punch in numbers) and what number we want to call. Wenham explains, in a different essay:
The numbers seem pretty simple in design, having an area code, exchange code and four digit number. The area code for Manhattan is 212, Queens is 718, Nassau County is 516, Suffolk County is 631 and so-on. Now let's pretend it's my job to build the phone routing system for Emergency 911 service in the New York City area, and I have to route incoming calls to the correct police department. At first it seems like I could use the area and exchange codes to figure out where someone's coming from, but there's a problem with that: cell phone owners can buy a phone in Manhattan and get a 212 number, and yet use it in Queens. If someone uses their cell phone to report an accident in Queens, then the Manhattan police department will waste precious time transferring the call.

Area codes are also used to determine the billing rate for each call, and this is another way the abstraction leaks. If you use your Manhattan-bought cell phone to call someone ten yards away while vacationing in Los Angeles, you'll get charged long distance rates even though the call was handled by a local cell tower and local exchange. Try as you might, there is no way to completely abstract the physical nature of the network.
He also mentions cell phones, which are somewhat less absurd than plain old telephones, but when you think about it, all we've done with cell phones is abstract the telephone lines. We're still connecting to a cell tower (and towers need to be placed densely throughout the world), and from there, a call is often routed through the plain old telephone system. If we could see the RF layer in action, we'd be astounded; it would make the telephone wires look organized and downright pleasant by comparison.
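
To make the leak concrete, here's a toy sketch in Python (mine, not Wenham's; the dispatch names are invented for illustration) of the naive routing logic he describes. It works fine for landlines and falls apart the moment a phone can move:

    # Naive 911 routing keyed on the caller's area code. The codes are
    # the New York-area ones mentioned above; the dispatch mapping is a
    # made-up simplification.
    DISPATCH_BY_AREA_CODE = {
        "212": "Manhattan dispatch",
        "718": "Queens dispatch",
        "516": "Nassau County dispatch",
        "631": "Suffolk County dispatch",
    }

    def route_911_call(phone_number):
        """Pick a dispatch center from the first three digits."""
        area_code = phone_number[:3]
        return DISPATCH_BY_AREA_CODE.get(area_code, "statewide operator")

    # A landline behaves: a 718 number really is in Queens.
    print(route_911_call("7185551234"))  # Queens dispatch

    # The leak: a cell phone bought in Manhattan keeps its 212 number
    # while its owner reports an accident in Queens. The number no
    # longer says anything about where the caller physically is.
    print(route_911_call("2125551234"))  # Manhattan dispatch (wrong!)

The only real fix reaches below the abstraction, to data about which tower actually handled the call, which is exactly Wenham's point: the physical network can't be hidden completely.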

The act of hiding the physical nature of a system behind an abstraction is very common, but it turns out that all major abstractions are leaky. Then again, all leaks in an abstraction are, to some degree, useful.

One of the most glamorous technological advances of the past 50 years was the advent of space travel. Thinking of the heavens is indeed an awe-inspiring and humbling experience, to be sure, but when you start breaking things down to the point where we can put a man in space, things get very dicey indeed. When it comes to space travel, there is no more glamorous a person than the astronaut, but again, how does one become an astronaut? By poring over and memorizing giant telephone-book-sized volumes filled with technical specifications and detailed schematics. Hardly a glamorous proposition.

Steven Den Beste recently wrote a series of articles concerning the critical characteristics of space warships, and it is fascinating reading, but one of the things that struck me about the whole concept was just how unglamorous space battles would be. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. This is, perhaps, the reason so many science fiction movies and books seem to flout the rules of physics. As a side note, I think a spectacular film could be made while still obeying the rules of physics, but that is only because we're so used to the absurd, physics-defying space battles.

None of this is to say that technological advances aren't worthwhile or that those who discover new and exciting concepts are somehow not impressive. If anything, I'm more impressed at what we've achieved over the years. And yet, since we take these advances for granted, we marginalize the effort that went into their discovery. This is due in part to the necessary abstractions we make to implement various systems. But when abstractions hide the crude underpinnings of technology, we see that technology and its creation as glamorous, thus bestowing honors upon those who make the discovery (perhaps for the wrong reasons). It's an almost paradoxical cycle. Perhaps because of this, we expect newer discoveries and innovations to somehow be less crude, but we must realize that all of our discoveries are inherently crude.

And while we've discovered a lot, it's all still crude and could use improvement. Some technologies have stayed the same for thousands of years. Look at toilet paper. For all of our wondrous technological advances, we're still wiping our asses with a piece of paper. The Japanese have the most advanced toilets in the world, but they've still not figured out a way to bypass simple toilet paper (or, at least, abstract the process). We've got our work cut out for us. Luckily, we're willing to go to absurd lengths to achieve our goals.
Posted by Mark on May 02, 2004 at 09:47 PM .: link :.


End of This Day's Posts

Sunday, April 25, 2004

Iraqi Ghosts, Puritans, and Geeks
Just a few interesting things I've stumbled across recently:
  • Baghdad Journal Part 10: Yet another installment in Steve Mumford's excellent series. As always, it's an eye-opening look at life on the ground in Iraq, and great if you're looking for a different perspective. If you like it, check out all of Mumford's other articles. This time around, Mumford describes more of the interactions between American commanders and Iraqi leaders and people, and it makes for fascinating reading. He even finds time to mention some ghost stories:
    You can still walk through the long empty corridors between companies and feel like there's not a soul around. Except ghosts. One evening, Lt. Jack Nothstine takes me up to the second floor to poke around with flashlights. The miles of burned rooms and corridors are empty of anything other than broken glass, plaster and the hulks of old medical equipment. Wires are dangling from the ceilings. "One night I was coming up the stairs to take over guard duty on the roof. Just when I was passing the second floor I clearly heard children's voices, speaking in Arabic, like they were playing. It was completely distinct. This base is in the middle of nowhere -- there are no kids around for miles. I just ran!" "A lot of guys have seen ghosts here. The medics have seen some of their patients that died on them."
    Spooky. Read the whole thing.
  • Neal Stephenson Interview in Salon: A long and detailed interview with Neal Stephenson about his new book, The Confusion (the second in the Baroque Cycle, the first being Quicksilver). It's at Salon, so you'll need to sit through a commercial to get it, but it's worth it... A short excerpt about Stephenson's sympathetic treatment of the puritans in his novels:
    I have a perverse weakness for past generations that are universally reviled today. The Victorians have a real bad name, and the word "Puritan" is never used except in a highly pejorative way, despite the fact that there are very strong Victorian and Puritan threads in our society today, and despite the fact that the Victorians and Puritans built the countries that we live in.
  • I usually hate internet quiz type things, but I took the Polygeek quiz and the resulting paragraph described my life much more accurately than these things normally do:
    You are a geek liaison, which means you go both ways. You can hang out with normal people or you can hang out with geeks which means you often have geeks as friends and/or have a job where you have to mediate between geeks and normal people. This is an important role and one of which you should be proud. In fact, you can make a good deal of money as a translator.
    Normal: Tell our geek we need him to work this weekend.

    You [to Geek]: We need more than that, Scotty. You'll have to stay until you can squeeze more outta them engines!

    Geek [to You]: I'm givin' her all she's got, Captain, but we need more dilithium crystals!

    You [to Normal]: He wants to know if he gets overtime.
    Wow. I was 32% geek, which sounds awfully low to me, but that paragraph is dead on:P
  • As you may have noticed, the random best entries picture is up (over there on the right). I'm still working on making images for several entries, but there are enough there for now... I've also updated my Links section of the website. It's not perfect and I'm still missing lots of stuff, but it's a start and it's much better than what was there before.
That's all for now, stay tuned for the unglamorous technology post (it's coming, I swear!)
Posted by Mark on April 25, 2004 at 11:14 AM .: link :.


End of This Day's Posts

Sunday, April 18, 2004

Quick Updates
Sorry for the lack of updates recently. I've been exceedingly busy lately, with no end in sight. And since my chain-smoking monkey research staff, emboldened by the Simpsons voice talent, have gone on strike, I don't have a whole lot of stuff to even point to. However, I'd like to make a few quick updates to some recent posts:
  • Thinking about Security: At one point in this post, I mentioned this:
    ...in order to make your computer invulnerable to external attacks from the internet, all you need to do is disconnect it from the internet. However, that means you can no longer access the internet! That is the price you pay for a perfectly secure solution to internet attacks. And it doesn't protect against attacks from those who have physical access to your computer. Also, you presumably want to use the internet, seeing as though you had a connection you wanted to protect. The old saying still holds: A perfectly secure system is a perfectly useless system.
    Not too long after I wrote that, I received a notice at work saying that they were shutting down internet access due to a security vulnerability in some of the software we use. A week later, patches had been installed and we were back up and running. It was an interesting week, however, as we realized just how much we relied on internet access to do our jobs (us being a website and all!). So in cases like this, the perfectly secure but useless system can be acceptable for short periods of time. As a permanent solution, it simply wouldn't work though...
  • Inherently Funny Words, Humor, and Howard Stern: After writing this, I got to thinking about politically correct terminology, and I realized that one of Stern's true strong points is his willingness to be politically incorrect, because the very act of railing against what is politically correct is funny in itself. A lot of humor is based on this sort of concept: it's not funny because of what it depicts, it's funny because it flies in the face of censorship. One of Stern's funniest bits from his movie, for instance, was one in which he played a "complete the sentence" game with things like "blank a doodle doo", which technically allowed him to say "cock" on the air. That's funny, not because "cock" is funny, but because he wasn't allowed to say it. In a world where we are forbidden to have blackboards in schools (because they're racist!), it's no wonder that people find political incorrectness funny. Personally, I try not to hurt anyone's feelings when referring to them, but this stuff does get out of hand, and when people intentionally break from the norm, it can be funny. Again, it may not be your thing, but it was just a thought...
That's all for now. Hopefully, in a month or so, things will be slowing down and I'll have more time to write. I still seem to be sticking to my schedule of posting every Sunday, but the weekday posts may be a bit scarce until things calm down.
Posted by Mark on April 18, 2004 at 12:46 PM .: link :.


End of This Day's Posts

Sunday, March 21, 2004

Inherently Funny Words, Humor, and Howard Stern
Here's a question: Which of the following words is most inherently funny?
  • Boob (and its variations, such as boobies and boobery)
  • Chinchilla
  • Aardvark
  • Urinal
  • Stroganoff
  • Poopie
  • Underpants
  • Underroos
  • Fart
  • Booger
Feel free to advocate your favorites or suggest new ones in the comments. Some words are just funny for no reason. Why is that? In Neil Simon's The Sunshine Boys, a character says:
Words with a 'k' in it are funny. Alkaseltzer is funny. Chicken is funny. Pickle is funny. All with a 'k'. 'L's are not funny. 'M's are not funny. Cupcake is funny. Tomatoes is not funny. Lettuce is not funny. Cucumber's funny. Cab is funny. Cockroach is funny -- not if you get 'em, only if you say 'em.
Well, that is certainly a start, but it doesn't really tell the whole story. Words with an "oo" sound are also often funny, especially when used in reference to bodily functions (as in poop, doody, booger, boobies, etc...) In fact, bodily functions are just plain funny. Witness fart.

Of course, ultimately it's a subjective thing. To me, boobies are funnier than breasts, even though they mean the same thing. To you, perhaps not. It's the great mystery of humor, and one of the most beautiful things about laughter is that it happens involuntarily. We don't (always) have to think about it, we just do it. Here's a quote from Dennis Miller to illustrate the point:
The truth is the human sense of humor tends to be barbaric and it has been that way all along. I'm sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what's funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man's Molier is another man's Screech and you know something thats the way it should be.
There has been a lot of controversy recently about the FCC's proposed fines against Howard Stern (which may have been temporarily postponed). Stern has been fined many times before, including "$600,000 after Stern discussed masturbating to a picture of Aunt Jemima." Stern, of course, has flown off the handle at the prospect of new fines. Personally, I think he's overreacting a bit by connecting the whole thing with Bush and the religious right, but part of the reason he is so successful is that his overreaction isn't totally uncalled for. At the core of his argument is a serious concern about censorship, and a worry about the FCC abusing its authority.

On the other hand, some people don't see what all the fuss is about. What's wrong with having a standard for the public airwaves that broadcasters must live up to? Well, in theory, nothing. I'm not wild about the idea, but there are things I can understand people not wanting to be broadcast over public airwaves. The problem here is deciding what is acceptable.

Just what is the standard? Sure, you've got the 7 dirty words - that's easy enough - but how do you define decency? The fines proposed against Stern are supposedly from a 3-year-old broadcast. Does that sound right to you? Recently Stern wanted to do a game in which the loser had to let someone fart in their face. Now, I can understand some people thinking that's not very nice, but does that qualify as "indecent"? Apparently, it might, and Stern was not allowed to proceed with the game (he was given the option to place the loser in a small booth, and then have someone fart in the booth). Would it actually have resulted in a fine? Who knows? And that is the real problem with standards. If you want to propose a standard, it has to be clear, and you need to straddle a line between what is hurtful and what is simply disgusting or offensive. You may be upset at Stern's asking a Nigerian woman if she eats monkeys, but does that deserve a fine from the government? And how much? And is it really the job of the government to decide these sorts of things? In the free market, advertisers can choose (and have chosen) not to advertise on Stern's program.

At the bottom of this post, Lawrence Theriot makes a good point about that:
Yes a lot of what Stern does could be considered indecent by a large portion of the population (which is the Supreme Court standard) but in this case it's important to consider WHERE those people might live and to what degree they are likely to be exposed to Stern's brand of humor before you decide that those people need federal protection from hearing his show. Or, in other words, might the market have already acted to protect those people in a very real way that makes Federal action unnecessary?

Stern is on something like 75 radio stations in the US and almost every one of them is concentrated in a city. Most people who think Stern is indecent do not live in city centers. They tend to live in "fly-over" country where Stern's show does not reach.

Rush Limbaugh by comparison (which no one could un-ironically argue is indecent in any way) is on 600 stations around the country, and reaches about the same number of listeners as Howard does (10 million to 14 million I think). So in effect, we can see that the market has acted to protect most of those who do not want to hear the kind of radio that Stern does. Stern's show, which could be considered indecent is not very widely available, when you compare it to Limbaugh's show which is available in virtually every single corner of the country, and yet a comparable number of people seem to want to tune in to both shows.

Further, when you take into account the fact that in a city like Miami (where Stern was taken off the air last week) there may be as many as a million people who want to hear his show, any argument that Stern needs to be censored on indecency grounds seems to fly right out the window.

Anyway, I think both sides are making some decent points in this argument, but I hadn't heard one up until now that took the market and demographics into account until last night, and we all know how much faith I put in the market to solve a lot of society's toughest questions, so I thought I'd point this one out as having had an impact on me.
In the end, I don't know the answer; there is no easy solution here. I can see why people want standards, but standards can be quite impractical. On the other hand, I can see why Stern is so irate at the prospect of being fined for something he said 3 years ago - and at never knowing whether what he's going to say qualifies as "indecent" (without really being able to take such a thing to court to decide). Dennis Miller again:
We should question it all; poke fun at it all; piss off on it all; rail against it all; and most importantly, for Christ's sake, laugh at it all. Because the only thing separating holy writ from complete bullshit is your perspective. Its your only weapon. Keep the safety off. Don't take yourself too seriously.
In the end, Stern makes a whole lot of people laugh and he doesn't take himself all that seriously. Personally, I don't want to fine him for that, but if you do, you need to come up with a standard that makes sense and is clear and practical to implement. I get the feeling this wouldn't be an issue if he was clearly right or clearly wrong...
Posted by Mark on March 21, 2004 at 09:04 PM .: link :.


End of This Day's Posts

Thursday, March 18, 2004

Elephants and the Media
I've been steadily knocking off films from my 2003 Should Have Seen Em list. Among the films recently viewed was Gus Van Sant's striking Elephant. The film portrays a massacre at an ordinary high school much like Columbine (I originally thought it was Columbine, and the similarities are numerous, but apparently it's not). It simply shows the events as they unfold, from the ordinary morning to the massacre that follows. There is no explanation, no preaching about the ills of modern society, no empty solutions proffered. It is the events of one day, as seen by a number of people, laid bare. Van Sant employs a series of long tracking shots, following this person or that, to lend an air of detached documentary to the film, and it works. This lack of sensationalism was a bold move, but I think the correct one, and it's the only way a movie about such a thing could possibly be relevant. Van Sant has said of this film: "I want the audience to make its own observations and draw its own conclusions," and I think he has succeeded admirably.

Roger Ebert wrote an excellent review of the movie, and in it, he comments:
Let me tell you a story. The day after Columbine, I was interviewed for the Tom Brokaw news program. The reporter had been assigned a theory and was seeking sound bites to support it. "Wouldn't you say," she asked, "that killings like this are influenced by violent movies?" No, I said, I wouldn't say that. "But what about 'Basketball Diaries'?" she asked. "Doesn't that have a scene of a boy walking into a school with a machine gun?" The obscure 1995 Leonardo Di Caprio movie did indeed have a brief fantasy scene of that nature, I said, but the movie failed at the box office (it grossed only $2.5 million), and it's unlikely the Columbine killers saw it.

The reporter looked disappointed, so I offered her my theory. "Events like this," I said, "if they are influenced by anything, are influenced by news programs like your own. When an unbalanced kid walks into a school and starts shooting, it becomes a major media event. Cable news drops ordinary programming and goes around the clock with it. The story is assigned a logo and a theme song; these two kids were packaged as the Trench Coat Mafia. The message is clear to other disturbed kids around the country: If I shoot up my school, I can be famous. The TV will talk about nothing else but me. Experts will try to figure out what I was thinking. The kids and teachers at school will see they shouldn't have messed with me. I'll go out in a blaze of glory."

In short, I said, events like Columbine are influenced far less by violent movies than by CNN, the NBC Nightly News and all the other news media, who glorify the killers in the guise of "explaining" them. I commended the policy at the Sun-Times, where our editor said the paper would no longer feature school killings on Page 1. The reporter thanked me and turned off the camera. Of course the interview was never used. They found plenty of talking heads to condemn violent movies, and everybody was happy.
Ouch. The entire review is good, so check it out.
Posted by Mark on March 18, 2004 at 08:56 PM .: link :.


End of This Day's Posts

Sunday, March 07, 2004

Ender's Humility
Thanks to Chris Wenham's short story Clear as mud, I've been craving a good science fiction novel. So I started reading Ender's Game by Orson Scott Card. It's an excellent book, and though I have not yet finished it, Card makes a lot of interesting choices. For those interested, there will be spoilers ahead.

The story takes place in the distant future where aliens have attacked earth twice, almost destroying the human race. To prepare for their next encounter with the aliens, humans band together under a world government and go about breeding military geniuses, and training them. The military pits students against each other in a series of warlike "games." Andrew "Ender" Wiggin is one such genius, but his abilities are far and above everyone else. This is in part due to his natural talent, but it is also due to certain personality traits: curiosity, an analytical thought process, and humility (among others).

The following passage takes place just after Ender commands his new army to a spectacular victory in just his first match as commander. It was such a spectacular victory, in fact, that Ender becomes a subject of ire amongst the other commanders.
Carn Carby made a point of coming to greet Ender before the lunch period ended. It was, again, a gracious gesture, and, unlike Dink, Carby did not seem wary. "Right now I'm in disgrace," he said frankly. "They won't believe me when I tell them you did things that nobody's ever seen before. So I hope you beat the snot out of the next army you fight. As a favor to me."

"As a favor to you," Ender said. "And thanks for talking to me."

"I think they're treating you pretty badly. Usually new commanders are cheered when they first join the mess. But then, usually a new commander has had a few defeats under his belt before he first makes it here. I only got here a month ago. If anybody deserves a cheer, it's you. But that's life. Make them eat dust."

"I'll try." Carn Carby left, and Ender mentally added him to his private list of people who also qualified as human beings.
One of the interesting things about Ender is that he's not perfect, and he freely admits it all the time. His humility is essential. Failure doesn't matter, so long as you learn from your failures (the ceramics parable is a recent example of this sort of thing). Ender doesn't fail much, but he's not afraid to confront the reality that someone might think of something he hasn't thought of. He relies on others to help him all the time. The passage above shows how much Ender values humility in his peers as well.

I don't know why Ender's humility surprised me, as Ender is, after all, only human. But it did. It's an interesting perspective, and I'm enjoying the book a lot. As I said, I haven't finished it yet, so for all I know, he becomes an arrogant and ignorant prick towards the end of the novel, but I doubt that. Ender's humility is integral to his success, as humility so often is. We'll need to keep this in mind, and point out failures we're making as they happen so that we can learn from them and apply those lessons. Naturally, everyone will disagree with each other as to what constitutes a failure and what lessons must be learned from which actions, but criticism never bothers me unless it's of the mean-spirited, unproductive variety. In short, I take Lileks' Andre the Giant philosophy:
Look. I'm a big-tent kinda guy. I'm willing to embrace all sorts of folk whose agendas may differ from mine, as long as we share the realization that there are many many millions out there who want us stone-cold bleached-bones dead. It's the Andre the Giant philosophy, expressed in "Princess Bride":

I hope we win.

That's all. If you can agree with that without doing a Horshack twitch, intent on adding conditions - oh! oh! what about genetically modified soy? - then we understand each other. We know that we have many disagreements, but we agree: I hope we win. Oh, we can argue about every word in that four-syllable statement. But when it comes down to it all, we're on the same page.

I hope we win.

Now let's pick it apart. Who's we? And what does win mean?
Well, I hope we win.
Posted by Mark on March 07, 2004 at 08:57 PM .: link :.


End of This Day's Posts

Sunday, February 08, 2004

Mastery
Dan Gable, from the 1972 Olympics
Last week, I wrote a biography for Dan Gable. Because the sport at which Gable excelled was wrestling, most have not heard of him, but within the sport he is a legend. That's him over there on the right, pictured with his Gold Medal from the 1972 Olympics (in which he went undefeated and, indeed, didn't give up a single point - much to the dismay of the Soviets, who had vowed to "scour the country" looking for someone to defeat Gable). His story is an interesting one, but one thing I'm not so sure I captured in my piece was just how obsessed with wrestling he was. He lived, ate, and drank wrestling. When asked what interests he has besides wrestling, the first thing he says is "Recovery" (of course, he has to be completely exhausted to partake in that activity). How he managed to start a family, I will never know (perhaps he wasn't quite as obsessed as I thought). It made me wonder if being that good at something was worth it...

There is an old saying "Jack of all trades, Master of none." This is indeed true, though with the demands of modern life, we are all expected to live in a constant state of partial attention and must resort to drastic measures like Self-Censorship or information filtering to deal with it all. This leads to an interesting corollary for the Master of a trade: They don't know how to do anything else!

I'm reminded of a story told by Isaac Asimov, in his essay Thinking about Thinking (which can be found in the Magic collection):
On a certain Sunday, something went wrong with my car and I was helpless. Fortunately, my younger brother, Stan, lived nearby and since he is notoriously goodhearted, I called him. He came at once, absorbed the situation, and began to use the Yellow Pages by the telephone to try to reach a service station, while I stood by with my lower jaw hanging loose. Finally, after a period of strenuous futility, Stan said to me with just a touch of annoyance, "With all your intelligence, Isaac, how is it you lack the brains to join the AAA?" Whereupon, I said, "Oh, I belong to the AAA," and produced the card. He gave me a long strange look and called the AAA. I was on my wheels in half an hour.
He tells this story as part of a discussion on the nature of intelligence and how one is judged to be intelligent. Which brings up an interesting point, how does one even know they are master of a trade? Nowadays, there are few who know one trade so well that all others suffer; we're mostly jacks, to some degree. There are some who are special, who can focus all of their energy into a single pursuit with great success. These people are extraordinarily rare, and somewhat scary in that they can be so brilliant in one sphere, but so clueless in another, more prosaic, department. But that does not help us in diagnosing mastery of a trade.

When you really start to get into it, of course, the metaphor breaks down. Personally, I wouldn't consider myself a master of any trades, but neither would I judge myself a jack. There are several subjects at which I excel, but I can't seem to focus on any one of them - mostly because I like them all so much and I cannot bring myself to narrowly focus my efforts on a single subject. I have my moments of absent-mindedness too, though none quite so drastic as the one in Asimov's amusing tale. But even if I did focus my efforts, how would I know when I've reached the point of mastery?

In the end, I don't think you can tell. Mastery is a worthwhile goal, even if you must sacrifice some of your favorite trades, but because we cannot tell when we've mastered a subject, the term really doesn't have much meaning. As Asimov implies in his aforementioned essay, the only really useful term is "different." It is this difference which is truly important, because what some of us cannot do, others can. This is the basis of society and civilization, and the reason we as humans have prospered as individuals.

And just for fun, an Asimov quote:
"Those people who think they know everything are a great annoyance to those of us who do." -Isaac Asimov
Damn straight.

Update 2.15.04: John Weidner suggests "that when the time comes that we re-open diplomatic relations with Iran, Dan Gable should be our ambassador." He makes a note of how Iranians have previously greeted "The Great Satans' wrasslin' team" with enthusiasm (and a cool Neal Stephenson book).
Posted by Mark on February 08, 2004 at 04:17 PM .: link :.


End of This Day's Posts

Tuesday, January 20, 2004

Self-Censorship
I've noticed a trend in my writing, or, rather, the lack thereof. There are generally four venues in which I write, three of which are on the internet, and one of which is for my job. In the three internet venues, my production has started relatively high, and steadily decreased as time went on. (I suppose I should draw a distinction between writing and simple conversation. Email, for example, is not included as that does not represent the type of writing I'm talking about, though I do write a lot of email and email could possibly become a venue in the future.)

My job sometimes entails the writing of technical specifications for web applications, and this, at least, does not suffer from the same problem. It can be challenging at times, especially if I need to tailor them towards both a technical and non-technical audience, but for the most part it is a straightforward affair (it helps that they pay me too). Once I have all the information, resources, and approvals I need, the writing comes easy (well, I'm simplifying for the sake of discussion here, but you get the point).

This is in part because technical writing doesn't need to be compelling, which is where I stumble. It's also because collecting information and resources for this sort of thing is simpler and the information is easier to organize. I'm not especially articulate when it comes to expressing my thoughts and ideas. If I ever am, it's only because I've spent an inordinate amount of time polishing the text (and if I'm still not, I'm in trouble, because I've already spent an inordinate amount of time polishing the text). Hell, I tried to be organized and wrote a bit of an outline for this post, but I had trouble doing even that.

And, of course, I notice that I'm not following my outline either. But I digress.

The other three venues are my weblog (natch), Everything2, and various discussion forums.

This weblog has come a long way over the three and a half years since I started it, and at this point, it barely resembles what it used to be. I started out somewhat slowly, just to get an understanding of what this blogging thing was and how to work it (remember, this was almost four years ago and blogs weren't nearly as common as they are now), but I eventually worked up to posting about once a day, on average. At that time, a post consisted mainly of a link and maybe a summary or some short commentary. Then a funny thing happened: I noticed that my blog was identical to any number of other blogs, and thus wasn't very compelling. So I got serious about it, and started really seeking out new and unusual things. I tried to shift focus away from the beaten path and started to make more substantial contributions. I think I did well at this, but it couldn't really last. It was difficult to find the offbeat stuff, even as I pored over massive quantities of blogs, articles and other information (which caused problems of its own). I slowed down, eventually falling into an extremely irregular posting schedule on the order of once a month, which I have since attempted to correct, with, I hope, some success. I recently noticed that I have been slumping somewhat, though I'm still technically keeping to my schedule.

During the period in which I wasn't posting much on the weblog, I was "noding" (as they call it) over at Everything2, which is a collaborative database project. There too, I started strong and have since petered out. However, similar to what happened in the weblog, the quality improved even as the quantity decreased. This is no coincidence. It takes longer to write a good node, so it makes sense that the quantity would be inversely proportional to the quality.

Of the three internet venues, discussion forums are the simplest, as they are informal and require the least amount of rigor (and in that respect, they resemble email, but there is a small difference which we will come to in a bit). Even then, though, in certain forums I have noticed my production fall as well. These are predominantly debating forums where I was making some form of argument. What I found was that, as time went on, I tended to take the debates more seriously and thus I spent more time and effort on making sure my arguments were logically consistent and persuasive. And again, my posting at these forums has slowed considerably.

One other note about these three venues: it seems that at any given time, I am only significantly contributing to one of them. When the blog posting slowed, I moved to E2, for example, and when that slowed down, I focused on the forums. Now that I've come back to the blog, the others have suffered. There are all sorts of reasons why writing slows that have nothing to do with the process of writing or choosing what to write, but I do think those things contribute as well.

In effect, this represents a form of self-censorship. I'm constantly evaluating ideas for inclusion in the weblog. Jonathon wrote about this a few weeks ago, and he put it well:
...having a weblog turns information overload into a two-way process: first you suck all this stuff into your head for processing; and then you regurgitate it as weblog posts. And, while this process isn't all that different from the ways in which we manipulate information in our jobs, it's something that we've chosen to do in addition to our jobs, something that detaches us even further from "real life". I suspect that the problem is compounded by the fact that weblog entries are—overwhelmingly—expressions of opinion and, to make it worse, many of the opinions are opinions about opinions on issues concerning which the opinionators have little, if any, firsthand knowledge or experience. Me included.
As time goes on, my evaluation of what is blog-worthy has gotten more and more discriminating (as always, there are exceptions) and the quality has gone up. But, of course, the quantity has gone down.

Why? Why do I keep doing this? It is tempting to write it off as laziness, and that is no doubt part of it. It's not like it takes me a week to write a post or a node. At most, it takes a combined few hours.

Part of the problem is finding a few uninterrupted hours in which to compose something. In all of my writing endeavors, I've set the bar high enough that it requires too much time to do at once. When I didn't expect much out of myself on the blog or on E2, I could produce a lot more, because each piece took little enough time that I could finish it quickly and effectively. Back in the day, I could blog during my lunch break. I haven't been able to do that lately (as in, the past few years).

The natural solution to that is to split up writing sessions, and that is what I often do, but there are difficulties with that. First, it breaks concentration. Each writing session needs to start with several minutes of re-familiarizing with the subject. So even the sessions need to be reasonably large chunks of time. In addition, if these chunks are spread out too far, you run the risk of losing interest and motivation (and it takes longer to re-familiarize yourself too).

Motivation can be difficult to sustain, especially over long periods of time, which might also be the reason why I seem to rotate between the three internet venues.

There is an engineering proverb that says Fast, Good, Cheap - Pick two. The idea is that when you're tackling a project, you can't have all three. If you favor making a quality product in a short period of time, it is going to cost you. Similarly, if you need to do it on the cheap and also in a short period of time, you're not going to end up with a quality product. I think there might be some sort of corollary at work here, Quality, Quantity, Time - Pick Two. Meaning that if I want to write a high quality post in a relatively short period of time, the quantity will suffer. If I want a high quantity of posts that are also of a high quality, then it will take up a lot of my time. And so on...

This post was prompted by something Dave Rogers wrote a while back:
I find I have less to say about things these days. Often I feel the familiar urge to say something, but now I'm as likely to keep quiet as I am to speak up. This bothers me a little, because I've always felt it was important to speak up when you felt strongly about something. Now I'm not so sure about that.

Sometimes the urge to speak up is the result of habituated thinking, a conditioned response. Someone writes something that triggers an emotional response, certain automatic behaviors kick in, and before I know it I'm writing some kind of negative response. I can't think of a case where it did any particular good. I get to feel a bit of an adrenaline rush from the experience, and maybe a couple of people agree with me and I get a little validation; but most of the time, the target of my ire and indignation is unaffected. There is no change of opinion, no reevaluation of position. It's all energy expended to no good end, other than perhaps to stimulate the already persuaded and generate a little titillation for the folks who like to watch. I also can't recall a case when, finding myself on the receiving end, I've altered my point of view; especially if it was something I cared enough about to have an opinion that was likely to provoke that kind of response.

I suppose this is a kind of self-censorship, but I think it's a good thing. One person's self-censorship is another person's self-discipline perhaps. Just as I've learned to pay attention to what's going on inside my own mind when I'm behind the wheel, becoming a calmer and safer driver in the process, I'm learning to pay attention not just to what I write, but why I want to write it.
Despite all that I've said so far, I actually have been writing here for quite some time. Sure, I swap venues or slow down sometimes, but I have kept a relatively steady pace among them in the past few years. Dave's post made me wonder about why I want to write and what kept me writing. There are plenty of reasons, but one of the most important is that I am usually writing about things I don't know very well... and I learn from the experience. Blogging originally taught me to seek out and find things off the beaten path, Everything2 gave me an excuse to research various subjects and write about them (most of what I write there are called "factuals" - sort of like writing an encyclopedia entry), and the forums forced me to form an opinion and let it stand up to critical testing. I'm not exactly sure what it is I'm learning right now, but I'm enjoying myself.
Posted by Mark on January 20, 2004 at 08:31 PM .: link :.


End of This Day's Posts

Sunday, January 18, 2004

To the Moon!
President Bush has laid out his vision for space exploration. Reaction has mostly been lukewarm. Naturally, there are opponents and proponents, but in my mind it is a good start. That we've changed focus to include long-term manned missions on the Moon and a mission to Mars is a bold enough move for now. What is difficult is that this is a program that will span several decades... and several administrations. There will be competition and distractions. To send someone to Mars on the schedule Bush has set requires a consistent will among the American electorate as well. However, given the technology currently available, that deliberate pace might prove to be a wise move.

A few months ago, in writing about the death of the Galileo probe, I examined the future of manned space flight and drew a historical analogy with the pyramids. I wrote:
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.

Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food-storage among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it).

"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.

Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?
We should, and I'm glad we're orienting ourselves in this direction. Bush's plan appeals to me because of its pragmatism. It doesn't seek to simply fly to Mars; it seeks to leverage the Moon first. We've already been to the Moon, but it still holds much value as a destination in itself, as well as a testing ground and possibly even a base from which to launch or at least support our Mars mission. Some, however, see the financial side of things as a little too pragmatic:
In its financial aspects, the Bush plan also is pragmatic -- indeed, too much so. The president's proposal would increase NASA's budget very modestly in the near term, pushing more expensive tasks into the future. This approach may avoid an immediate political backlash. But it also limits the prospects for near-term technological progress. Moreover, it gives little assurance that the moon-Mars program will survive the longer haul, amid changing administrations, economic fluctuations, and competition from voracious entitlement programs.
There's that problem of keeping everyone interested and happy in the long run again, but I'm not so sure we should be too worried... yet. Wretchard draws an important distinction: we've laid out a plan to voyage to Mars - not a plan to develop the technology to do so. Efforts will be proceeding on the basis of current technology, but as Wretchard also notes in a different post, current technology may be unsuitable for the task:
Current launch costs are on the order of $8,000/lb, a number that will have to be reduced by a factor of ten for the habitation of the moon, the establishment of La Grange transfer stations or flights to Mars to be feasible. This will require technology, and perhaps even basic physics, that does not even exist. Simply building bigger versions of the Saturn V will not work. That would be "like trying to upgrade Columbus's Nina, Pinta, and Santa Maria with wings to speed up the Atlantic crossing time. A jet airliner is not a better sailing ship. It is a different thing entirely." The dream of settling Mars must await an unforeseen development.
Naturally, the unforeseen development is notoriously tricky, and while we must pursue alternate forms of propulsion, it would be unwise to hold off on the voyage until this development occurs. We must strike a delicate balance between concentration on the goal and the means to achieve that goal. As Wretchard notes, this is largely dependent on timing. What is also important here is that we are able to recognize this development when it happens and that we leave our program agile enough to react effectively to it.

Recognizing this development will prove interesting. At what point does a technology become mature enough to use for something this important? This may be relatively straightforward, but it is possible that we could jump the gun and proceed too early (or, conversely, wait too long). Once recognized, we need to be agile, by which I mean that we must develop the capacity to seamlessly adapt the current program to exploit this new development. This will prove challenging, and will no doubt require a massive increase in funding, as it will also require a certain amount of institutional agility - moving people and resources to where we need them, when we need them. Once we recognize our opportunity, we must pounce without hesitation.

It is a bold and challenging, yet judiciously pragmatic, vision that Bush has laid out, but this is only the first step. The truly important challenges are still a few years off. What is important is that we recognize and exploit any technological advances on our way to Mars, and we can only do so if we are agile enough to effectively react. Exploration of the frontiers is a part of my country's identity, and it is nice to see us proceeding along these lines again. Like the Egyptians so long ago, this mammoth project may indeed inspire a unity amongst our people. In these troubled times, that would be a welcome development. Though Europe, Japan, and China have also shown interest in such an endeavor, I, along with James Lileks, like the idea of an American being the first man on Mars:
When I think of an American astronaut on Mars, I can't imagine a face for the event. I can tell you who staffed the Apollo program, because they were drawn from a specific stratum of American life. But things have changed. Who knows who we'd send to Mars? Black pilot? White astrophysicist? A navigator whose parents came over from India in 1972? Asian female doctor? If we all saw a bulky person bounce out of the landing craft and plant the flag, we'd see that wide blank mirrored visor. Sex or creed or skin hue - we'd have no idea.

This is the quintessence of America: whatever face you'd see when the visor was raised, it wouldn't be a surprise.
Indeed.

Update 1.21.04: More here.
Posted by Mark on January 18, 2004 at 05:16 PM .: link :.


End of This Day's Posts

Sunday, December 28, 2003

On the Overloading of Information
Jonathon Delacour asks a poignant question:
who else feels overwhelmed by the volume of information we expect ourselves to absorb and process every day? And how do you manage to deal with it?
Judging from the comments, his post has obviously struck a chord with his readers, myself included. I am once again reminded of Neal Stephenson's original minimalist homepage, in which he speaks of his ongoing struggle against what Linda Stone termed "continuous partial attention," for that is the way modern life must be for a great many of us.

I am often overwhelmed by a desire to consume various things - books, movies, music, etc... The subjects of such things are also varied and, as such, often don't mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time). I have been doing so for almost three and a half years, more or less, and the blog as it now exists barely resembles what it once did. This is, in part, because my interests have shifted during that time. There was a period of about a year in which blogging was very sparse indeed, but before I tackle that, I wish to backtrack a bit.

As I mentioned, this subject has struck a chord with a great deal of people, and the most common suggestion for how to deal with such a quandary is a form of information filtering. Usually this takes the form of a rather extreme and harsh filtering system - namely, removing one source of information entirely. Delacour speaks of a friend who only recently bought a television and VCR, and even then he only did so so that his daughters could watch videos a few times a week. The complete removal of one source of information seems awfully drastic to me, though I suppose I've done so from time to time. For about a year, I had not bought or sought out any new music, only recently emerging from this out of boredom. It was a conscious decision to remove music from my sphere of learning, though I continued to listen to and very much enjoy music. I simply didn't understand music the way I understood film or literature (inasmuch as I understand either of those) and didn't want to burden myself with overinterpreting yet another medium. Even as it stands now, I'm not too concerned over what I'm listening to, as long as it keeps my attention during a rather long commute.

Some time ago, I used to blog a lot more often than I do now. And more than that, I used to read a great deal of blogs, especially new blogs (or at least blogs that were new to me). Eventually this had the effect of inducing a sort of ADD in me. I consumed way too many things way too quickly and I became very judgemental and dismissive. There were so many blogs that I scanned (I couldn't actually read them, that would take too long for marginal gain) that this ADD began to spread across my life. I could no longer sit down and just read a book, even a novel.

Eventually, I recognized this, took a bit of a break from blogging, and attempted to correct course, with some success. I have since returned to blogging, albeit at a slower pace, and have taken measures against falling into that same trap, though only with limited success. I have come to the conclusion that I can only do one major internet endeavor at a time. During the period of slow blogging, I turned my attention towards Everything2 (a sort of online collaborative encyclopedia), but I have found that as I returned to blogging, I could not find time for E2, unless they somehow overlapped (as they do, from time to time). Likewise, I cannot devote much time to discussion of various subjects at various forums if I am blogging or noding (as posting at E2 is called). Delacour's description of his own quandary is somewhat accurate in my case as well:
Self-employment, a constant Internet connection, a weblog, and a mildly addictive personality turn out to be a killer combination - even for someone who no longer feels compelled to post regularly, let alone every day.
So the short answer to Delacour's question of how do people deal with information overload is of course filtering. It is the manner and degree to which we filter that is important. And of course it must be said that any filtering system which you set up must be dynamic - it must change as you change and the world changes. It is a challenge to find the right balance, and it is also a challenge to keep that balance.

***

An interesting post-script to this is that I ran across Delacour's post several weeks ago, and am only coming to post about it today. Make of that what you will.

In any case, I'd like to turn my attention to another of Delacour's posts, titled I'll link to whoever he's linking to, in which he talks a lot about what drives people to link to other blogs on their blog. It is an exceptional analysis and well worth reading in its entirety. At one point, he points to "six principles of persuasion" (as defined by a Psychology professor in the context of cult recruitment) and applies those principles to weblogs and blogrolls with some success. This has prompted some thought on my part, and I have decided to update the blogroll. As you might guess, a number of the six principles of persuasion are at work in my blogroll, but I would note that the most accurate in my case are "liking" (as in, the reason all of those links are there is because I like them and read them regularly - indeed, the list is there almost out of a pragmatic want of having the most common sites I visit linked from one place) and "Commitment and Consistency." By far the least important is the "Social Proof" principle, which states that "In a given situation, our view of whether a particular behavior is correct or not is directly proportional to the number of other people we see performing that behaviour" or, applied to blogs, "If all those other people have X on their blogrolls, then he definitely should be on my blogroll."

In fact, I had updated the blogroll somewhat recently already. One of the blogs I added then was the Belmont Club, which has enjoyed a certain amount of notoriety lately, thanks in part to Steven Den Beste (who, interestingly enough, had prompted Delacour's post about linking in the first place). So Belmont Club went from a relatively obscure (but excellent) blog to a blog that is well known and now highly linked to. Believe it or not, this has weighed unfavorably upon my decision to keep Belmont Club on the blogroll. I have opted to do so for now because my "liking" that blog far outweighs my distaste for "social proof." In any case, the blogroll will be updated shortly, with but a few new blogs...

I find both of these subjects (information overload and linking) to be interesting, so I may spend some time later this week hashing out a little more about both subjects... or perhaps not - perhaps some other interest will gain favor in my court. We shall see, I suppose.
Posted by Mark on December 28, 2003 at 11:17 AM .: link :.


End of This Day's Posts

Wednesday, December 03, 2003

Is the Christmas Tree Christian?
The Winter Solstice occurs when your hemisphere is leaning farthest away from the sun (because of the tilted axis of the earth's rotation), and thus this is the time of the year when daylight is the shortest and the sun has its lowest arc in the sky.

No one is really sure when exactly it happened (or who started the idea), but this period of time eventually took on an obvious symbolic meaning to human beings. Many geographically diverse cultures throughout history have recognized the winter solstice as a turning point, a return of the sun. Solstice celebrations and ceremonies were common, sometimes performed out of a fear that the failing light of the sun would never return unless humans demonstrated their worth through celebration or vigil.

It has been claimed that the Mesopotamians were among the first to celebrate the winter solstice with a 12-day festival of renewal, designed to help the god Marduk tame the monsters of chaos for one more year. Other theories go as far back as 10,000 years. More recently, the Romans celebrated the winter solstice with a festival called Saturnalia in honor of Saturn, the god of agriculture.

Integral to many of these celebrations were plants and trees that remained green all year. Evergreens reminded them of all the green plants that would grow again when the sun returned; they symbolized the solstice and the triumph of life over death.

In the early days of Christianity, the birth of Christ was not celebrated (instead, Easter was, and possibly still is, the main holiday of Christianity). In the fourth century, the Church decided to make the birth of Christ a holiday to be celebrated. There was only one problem - the Bible makes no mention of when Christ was born. Although there was some evidence to draw from, the Church chose to celebrate Christmas on December 25. It is believed that this date was chosen to coincide with traditional winter solstice festivals, such as the Roman pagan Saturnalia festival, in the hopes that Christmas would be more popularly embraced by the people of the world. And embraced it was, but the Church found that as the holiday spread, their choice to hold Christmas at the same time as solstice celebrations did not allow the Church to dictate how the holiday was celebrated. And so many of the pagan traditions of the solstice survived over the following centuries, even though pagan religions had largely given way to Christianity.

And so the importance of evergreens in these celebrations continued. The use of the Christmas tree as we now know it is generally credited to sixteenth-century Germans, specifically the Protestant reformer Martin Luther, who is thought to be the first to add lighted candles to a tree.

While the Germans found a certain significance in the pagan traditions concerning evergreens, it was not a universally held belief. For instance, the Christmas tree did not gain traction in America until the mid-nineteenth century. Up until then, Christmas trees were generally seen as pagan symbols and mocked by New England Puritans. But the tradition caught on thanks to German settlers in Pennsylvania (among others) and the increasing secularization of the holiday in America. In the past century, the Christmas tree has only grown in popularity, as more and more people adopted the tradition of displaying a decorated evergreen in their home. After all this time, Christmas trees have become an American tradition.

There has been a lot of controversy lately concerning the presence (or, I suppose, the removal and thus absence) of Christmas trees in schools. Personally, I don't see what is so controversial about it, as a Christmas tree is more of a secular, rather than religious, symbol. Joshua Claybourn quotes the Supreme Court thusly:
"The Christmas tree, unlike the menorah, is not itself a religious symbol. Although Christmas trees once carried religious connotations, today they typify the secular celebration of Christmas." Allegheny v. American Civil Liberties Union Greater Pittsburgh Chapter, 492 U.S. 573, 109 S.Ct. 3086.
It does not represent a religious idea, but rather the idea of renewal that accompanied the winter solstice. One can associate Christian ideas with the tree, as Martin Luther did so long ago, but that does not make it inherently Christian. Indeed, I think of the entire Christmas holiday as more secular than not, though I guess my being Christian might have something to do with it. This idea is worth exploring further in the future, so expect more posts on the historical Christmas.

Update: Patrick Belton notes the strange correlations between Christmas Trees and Prostitution in Virginia.
Posted by Mark on December 03, 2003 at 11:31 PM .: link :.


End of This Day's Posts

Thursday, November 27, 2003

A Thanksgiving Cuisine Proposal
Last night I dined on fresh Sushi and washed it down with a generous portion of Hennepin (a fine beer, that). I was thinking of today's inevitable gorging and I had a brilliant idea.

Turkey Sushi.

If I had any photoshopping skillz, I'd have a really funny picture of a piece of sushi with a cartoon turkey head sticking out of it.

Anyway, the only thing I can't figure out is the seaweed. I'm not sure how that would go with this. Then again, throw in a sliver of gelatinous cranberry sauce with the cold turkey and you have an even better turkey roll. This is a huge market we're missing out on here! I'll be a millionaire in no time. Happy Thanksgiving all!
Posted by Mark on November 27, 2003 at 10:38 AM .: link :.


End of This Day's Posts

Wednesday, October 08, 2003

Annals of the Mathematically Challenged
Fritz Schranck relates a story of a mathematically challenged fast-food cashier whose register was broken and who couldn't figure out how to make change (the customer had given the cashier $10 for a bill of $8.95). He goes on to say that he's heard these sorts of stories before, but he'd never seen it for himself until then...

But I think I've got him beat. A few years ago, I happened to be perusing some titles at the 'tique, when someone asked the sales clerk what time it was. He picked up a watch, and a confused frown spread across his face. He then grinned, and grabbed a calculator from under the counter and began punching in numbers. At this point he responded to the customer's quizzical look by explaining "The watch is on military time." It was 1400 hours (a.k.a. 2:00 p.m.).
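
As an aside, the arithmetic the poor guy needed is about as simple as it gets: knock off 12 when the hour is past noon. A minimal sketch in Python (the function name is my own invention):

    def military_to_civilian(hours: int) -> str:
        """Convert a 24-hour clock reading like 1400 into 12-hour time."""
        h = hours // 100                  # 1400 -> 14
        suffix = "a.m." if h < 12 else "p.m."
        h12 = h % 12 or 12                # 14 % 12 = 2; 0 and 12 both map to 12
        return f"{h12}:00 {suffix}"

    print(military_to_civilian(1400))     # 2:00 p.m.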
Posted by Mark on October 08, 2003 at 11:28 PM .: link :.


End of This Day's Posts

Monday, September 08, 2003

My God! It's full of stars!
What Galileo Saw by Michael Benson : A great New Yorker article on the remarkable success of the Galileo probe. James Grimmelmann provides some fantastic commentary:
Launched fifteen years ago with technology that was a decade out of date at the time, Galileo discovered the first extraterrestrial ocean, holds the record for most flybys of planets and moons, pointed out a dual star system, and told us about nine more moons of Jupiter.

Galileo's story is the story of improvisational engineering at its best. When its main 134 KBps antenna failed to open, NASA engineers decided to have it send back images using its puny 10bps antenna. 10 bits per second! 10!

To fit images over that narrow a channel, they needed to teach Galileo some of the tricks we've learned about data compression in the last few decades. And to teach an old satellite new tricks, they needed to upgrade its entire software package. Considering that upgrading your OS rarely goes right here on Earth, pulling off a half-billion-mile remote install is pretty impressive.
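To put that 10 bps figure in perspective, here's some back-of-envelope math in Python. The 800x800, 8-bit image format matches Galileo's camera as I understand it, and the run-length encoder below is only a toy illustration of the compression idea - not the scheme NASA actually flew:

    # At 10 bits per second, an uncompressed camera frame takes the better
    # part of a week to transmit.
    bits_per_image = 800 * 800 * 8            # 5,120,000 bits
    seconds = bits_per_image / 10             # at 10 bps
    print(f"{seconds / 86400:.1f} days per uncompressed image")   # ~5.9 days

    # A toy run-length encoder: the simplest possible example of trading
    # onboard computation for precious bandwidth.
    def rle(data: bytes) -> list[tuple[int, int]]:
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            runs.append((data[i], j - i))     # (pixel value, run length)
            i = j
        return runs

    print(rle(b"\x00\x00\x00\xff\xff\x00"))   # [(0, 3), (255, 2), (0, 1)]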
And the brilliance doesn't end there:
As if that wasn't enough hacker brilliance, design changes in the wake of the Challenger explosion completely ruled out the original idea of just sending Galileo out to Mars and slingshotting towards Jupiter. Instead, two Ed Harris characters at NASA figured out a triple bank shot -- a Venus flyby, followed by two Earth flybys two years apart -- to get it out to Jupiter. NASA has come in for an awful lot of criticism lately, but there are still some things they do amazingly well.
Score another one for NASA (while you're at it, give Grimmelmann a few points for the Ed Harris reference). Who says NASA can't do anything right anymore? Grimmelmann observes:
The Galileo story points out, I think, that the problem is not that NASA is messed-up, but that manned space flight is messed-up.
...
Manned spaceflight is, in the Ursula K. LeGuin sense, perverse. It's an act of pure conspicuous waste, like eating fifty hotdogs or memorizing ten thousand digits of pi. We do it precisely because it is difficult verging on insane.
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder them. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.

Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources was expended, but the product was truly awe-inspiring, and still is today.

Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months when their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it).

"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.

Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?

In the meantime, Galileo is nearing its end. On September 21st, around 3 p.m. EST, Galileo will be vaporized as it plummets into Jupiter's atmosphere, sending back whatever data it still can. This destruction is exactly what was planned for Galileo; it is the answer to an intriguing ethical dilemma.
In 1996, Galileo conducted the first of eight close flybys of Europa, producing breathtaking pictures of its surface, which suggested that the moon has an immense ocean hidden beneath its frozen crust. These images have led to vociferous scientific debate about the prospects for life there; as a result, NASA officials decided that it was necessary to avoid the possibility of seeding Europa with alien life-forms.
I had never really given thought to the idea that one of our space probes could "infect" another planet with our "alien" life-forms, though it does make perfect sense. Reaction to the decision among those who worked on Galileo is mixed, most recognizing the rationale, but not wanting to let go anyway (understandable, I guess)...

For more on the pyramids, check out this paper by Marcell Graeff. The information I used in this article came primarily from Kurt Mendelssohn's book The Riddle of the Pyramids, which Graeff references.

Update 9.25.03 - Steven Den Beste has posted an excellent piece on the Galileo mission and more...
Posted by Mark on September 08, 2003 at 11:06 PM .: link :.


End of This Day's Posts

Wednesday, August 27, 2003

Come Sail Away
Cruises really are wonderful vacations. I just returned from one, so, in an effort to induce massive jealousy in my readers, I figured I'd give a rundown of all the glorious events which occurred during the past week. I went on a cruise to Bermuda on the Celebrity line a few years back, so I'll be using that as a comparison. This time, I went to the Southern Caribbean on the Royal Caribbean line.

Getting There: The ship sails out of San Juan on Sunday, so you'll need to arrange a flight (uh, unless you're Puerto Rican, I guess), with all the shiny happy security details that implies in the post 9/11 airline world (it also jacks up the price of the overall vacation a little - my cruise to Bermuda left out of New York and so I didn't need to fly). We decided to go early and spend Saturday in San Juan. Given that we were staying at the Ritz-Carlton, this was a most pleasant experience and an excellent start to the vacation. I would highly recommend looking into this option as it was surprisingly inexpensive, and it really is a top notch resort with a fantastic private beach, a huge pool (which was great way to wash off sand), a nice little spa (which I didn't use, but looked great) and some good dining options (I had some Sushi, and was much pleased).

The Ship: Our ship was called the Adventure of the Seas and it was truly awesome (in every sense of that word). All the standard cruise-ship amenities are there: shuffleboard, food and drinks around every corner, pools, showrooms etc... but there are also quite a few uncruise-like activities such as a roller blading track, miniature golf course, ice skating rink, and rock climbing wall. There is this thing called the Royal Promenade, which is a sort of main-street of the ship, with a bunch of shops, bars and cafes (some of which are thankfully open all night). There's a Johnny Rocket's on board as well, just in case you were in the mood for a retro burger joint.

Food: The food was excellent. The main dining room was modeled after the Titanic's dining room, with extravagant settings and twisty staircases. For those who have never been on a cruise, it's difficult to explain just how great the dinners are. There is a different menu every night (each one has a healthy choice and a vegetarian choice as well, in case you were worried :P) and if you are ever torn between ordering two appetizers or entrees or desserts, they'll gladly bring them both out for you. Generally, we only ate dinner there (though I did manage a few lunches, which were surprisingly good); breakfast and lunch were had at the Windjammer Cafe and Caribbean Grill, a buffet that is usually open and provides a low-key alternative to the formality of the main dining room (I never did that though, as I enjoyed the main dining room). Celebrity is known for its superb dining, and Royal Caribbean did a good job but came up just a little bit short (still excellent though).

Entertainment: There is always something to do on a cruise ship. Always. Every day, you get an itinerary of all the things that are going on that day, and you've usually got a lot of options. Every night there is a show in the theater (some nights, there is an Ice Show, which is especially interesting when the ship is moving). Generally, though, I found myself in the Duck and the Dog British pub, doing stuff like this (for the uninitiated, that thing we're drinking is what's known as an Irish Carbomb). There was a guy playing guitar there every night, and he was awesome (his name was Mark O'Bitz, I can't find anything about him on the net though...). He played all week, and pretty much the same people came every night, so by the end of the week we were all having a blast. A couple of the passengers even got up and sang a song or two. The song that ended up being the cruise's theme was Come Sail Away - one of the passengers always got up and sang it, and he was absolutely marvelous. The whole bar got into it. It was great!

Ports: We docked at 5 ports during the week:
  • St. Thomas: Nice island, good beaches, and cheap booze. It was raining a little bit on this day, but it was still a good time.
  • St. Martin: One of the supposed great things about a cruise is shopping. Generally, you can get certain items down there much cheaper than you could back home, and St. Martin is apparently known for great shopping. The big items that everyone seemed to be looking for were cameras and watches, both of which were "cheap" (I guess it would be better to say severely discounted, as a $600 Movado watch that normally sells for $1400 is a great deal, but still way too much for a watch imho). Nice beaches too (as if that's a surprise).
  • Antigua: Another staple of Caribbean islands is the amount of harassment you encounter just walking around town. You can't walk two inches without being asked if you need a cab (this was the same in St. Martin as well, but it was worse in Antigua). We ended up getting one good driver, who was funny as hell. He had these custom horns on his car, so when he was driving along he would press them and it would say "MOVE OVER!" really loud. Pedestrians would turn and look quizzically, and some even moved out of the way. It was funny. At the beach, some guy with aloe plants started harassing a lady friend of ours, and some random cab driver tried to act like he worked for the beach and charged us for chairs (which were free). We also did a snorkeling excursion here, which was nice... I met an Air Force guy there who had just gotten back from Iraq, and I promised to buy him a drink later. He was very grateful, and he said I was one of many who had offered. I've heard a lot of good things about Antigua, and it really was a great island, but I think we just hit a bit of bad luck with the locals...
  • St. Lucia: It was raining a lot when we got there, so I ended up not doing a lot. A few friends took a bus tour, and they said it was a beautiful island, but they really need to build some tunnels and straight roads. Apparently, they filmed one of the Superman movies here, though I couldn't figure out which one or what scenes...
  • Barbados: This ended up being our favorite of the islands. It's a beautiful island, and the locals weren't nearly as annoying as they were in other places. We went to Malibu (a beach where they make the infamous rum), which was awesome (despite a run-in with the Barbadian Coast Guard), and we also went on the Jolly Roger Pirate Cruise. The Jolly Roger excursion is what is called a "booze cruise," as they immediately start serving rum punch, and by the end, I was feeling pretty darn good. The Jolly Roger is a fairly common excursion, as it was on several of the islands we visited. If that's your bag, I recommend it (it seemed to be a lot more crowded on Antigua, but I liked that our ship wasn't bursting with drunk people).
Again, Barbados was almost everyone's favorite island, but they were all a lot of fun. The only bad thing about the ports was that we were only at each one for one day; there is so much to do down there, but you had to be back on the boat by 5 pm. The Bermuda cruise was nice because you stayed at port for at least a day and a half, so you could get off the boat at night or even see the sunset at the beach (rather than at sea, which is still beautiful).

BINGO and Degenerate Gambling: Another cruise staple: BINGO! Alas, despite playing several sessions of BINGO, I did not win. I did, however, win a raffle! I got my choice of 6 paintings. I ended up choosing a painting by Anatole Krasnyansky. It's called Venice Yellow Sunset.

I like to gamble, and I finished almost every night on the cruise at the Casino. I ended up doing surprisingly well, though I think I might be developing a problem (just kidding - I was shocked at my restraint during the week. Whenever I was up by a certain amount, I walked, which is the only way you can win at gambling in a Casino). I played a lot of blackjack, but my game of choice ended up being Roulette, which I had never played before. It was a lot of fun, but it is way too easy to drop lots of money...

Returning Home: Not much to say about the return, other than that the airport security in Puerto Rico was very impressive. They were quick, efficient, and thorough (I even had to run my shoes through the x-ray machine with my carry-on).

So there you have it. I could probably go on and on and on about other things I loved about this cruise, but I'm not that cruel. If you have a vacation coming up, check out the cruise option (unless you get sea-sick).

Update 11.23.03 - Added a link to the painting. Also check out the comments for the profound effect Mark O'Bitz has had on many people's lives!
Posted by Mark on August 27, 2003 at 11:11 PM .: link :.


End of This Day's Posts

Friday, August 08, 2003

Villainous Brits!
A few weeks ago, the regular weather guy on the radio was sick and a British meteorologist filled in. And damned if I didn't think it was the best weather forecast I'd ever heard! The report, which called for rain on a weekend in which I was traveling, turned out to be completely inaccurate, much to my surprise. I really shouldn't have been surprised, though. I know full well the limitations of meteorology; weather reports just can't be that accurate. Truth be told, I subconsciously placed a higher value on the weather report because it was delivered in a British accent. It's not his fault - he can predict the weather no better than anyone else in the world - but the British accent carries with it an intellectual stereotype; when I hear one, I automatically associate it with intelligence.

Which brings me to John Patterson's recent article in the Guardian in which he laments the inevitable placement of British characters and actors in the villainous roles (while all the cheeky Yanks get the heroic roles):
Meanwhile, in Hollywood and London, the movie version of the special relationship has long played itself out in like manner. Our cut-price actors come over and do their dirty work, as villains and baddies and psychopaths, even American ones, while the cream of their prohibitively expensive acting talent Concordes it over the pond to steal the lion's share of our heroic roles. Either way, we lose.
One might wonder why Patterson is so upset that American actors get the heroic parts in American movies, but even if you ignore that, Patterson is stretching it pretty thin.

As Steven Den Beste notes, this theory doesn't go too far in explaining James Bond or Spy Kids. Never mind that the Next Generation captain of the starship Enterprise was a Brit (playing a Frenchman, no less). Ian McKellen plays Gandalf; Ewan McGregor plays Obi Wan Kenobi. The list goes on and on.

All that aside, however, it is true that British actors and characters often do portray the villain. It may even be as lopsided as Patterson contends, but the notion that such a thing implies some sort of deeply-rooted American contempt for the British is a bit off.

As anyone familiar with film will tell you, the villain needs to be so much more than just vile, wicked or depraved to be convincing. A villainous dolt won't create any tension with the audience; you need someone with brains or nobility. Ever notice how educated villains are? Indeed, there seems to be a preponderance of doctors that become supervillains (Dr. Demento, Dr. Octopus, Dr. Doom, Dr. Evil, Dr. Frankenstein, Dr. No, Dr. Sardonicus, Dr. Strangelove, etc...) - does this reflect an antipathy towards doctors? The abundance of British villains is no more odd than the abundance of doctors. As my little episode with the weatherman shows, when Americans hear a British accent, they hear intelligence. (This also explains the Gladiator case, in which Joaquin Phoenix, who is Puerto Rican by the way, puts on a veiled British accent.)

The very best villains are the ones that are honorable, the ones with whom the audience can sympathize. Once again, the American assumption of British honor lends a certain depth and complexity to a character that is difficult to pull off otherwise. Who was the more engaging villain in X-Men, Magneto or Sabretooth? Obviously, the answer is Magneto, played superbly by British actor Ian McKellen. Having endured Nazi death camps as a child, he's not bent on domination of the world; he's attempting to avoid living through a second holocaust. He's not a megalomaniac, and his motivation strikes a chord with the audience. Sabretooth, on the other hand, is a hulking but pea-brained menace who contributes little to the conflict (much to the dismay of fans of the comic, in which Sabretooth is apparently quite shrewd).

Such characters are challenging. It's difficult to portray a villain as both evil and brilliant, sleazy and funny, moving and tragic. In fact, it is because of the complexity of this duality that villains are often the most interesting characters. That British actors are often chosen to do so is a testament to their capability and talent.

Some would attribute this to stage training, which is much less common in the U.S. British actors can give a daring and audacious performance while still fitting into an ensemble. It's also worth noting that many British actors are relatively unknown outside of the UK. Because they are capable of performing such difficult roles, and because they are unfamiliar to US audiences, they make the films more interesting.

In the end, there's really very little that Patterson has to complain about, especially when he tries to port this issue over to politics. While a case may be made that there are a lot of British villains in movies (and there are plenty of villains that aren't British), that doesn't mean there is anything malicious behind it; indeed, depending on how you look at it, it could be considered a compliment that British culture lends itself to the complexity and intelligence required for a good villain - the kind we all love to hate (and hate to love). [thanks to USS Clueless for the Guardian article]
Posted by Mark on August 08, 2003 at 09:36 AM .: link :.


End of This Day's Posts

Friday, July 11, 2003

Duditivity
Dude, Where's My Dude? Dudelicious Dissection, From Sontag to Spicoli by Ron Rosenbaum : Dude, this is some seriously funny reading. The complete history of Dude, from its humble origins as an "aesthetic craze" in New York, circa 1883, to Dude, Where's My Car? in 2000.
Everybody thinks "dude ranch" came first and was somehow the origin. But whence came the dude in "dude ranch"? Before the dude-ranch dude there was dude as dandy, the dude as an urban aesthete; it was the urbanity of dude that made the dude-ranch dude dude-ish.
This is so stupid, but it's a smart stupid. Almost Pynchonian, really. Seriously, it's a surprisingly complete article, worth reading if only to experience the whopping 160 or so occurrences of the term "Dude" or its derivatives. [via Ipse Dixit - Thanks Dude!]

Update: Unrelated, but interesting: A brief Googling of Pynchon and Dude turned up this article, also by Rosenbaum, about Pynchon and Phone Phreaking.
Posted by Mark on July 11, 2003 at 12:40 PM .: link :.


End of This Day's Posts

Sunday, July 06, 2003

Trivial Pursuit
I was playing Trivial Pursuit the other day, and I was again struck by the victimology that always seems to play out during such a game. "You get all the easy questions! It's no fair!" At times, that's probably true, but over the course of an entire game, it's a little less clear who is really getting the short end of the stick. Ignoring for a moment what questions are considered easy (if I answer a question immediately after it was asked, was it an easy question?), this sort of victimology is a difficult thing to avoid. I definitely feel that way sometimes, but I'm beginning to come around. Besides, in the end there's really nothing you can do about it. Nobody said life would be fair.

Obviously, this doesn't just affect trivia games, either. My first programming class in college was extremely difficult. The professor was a stickler for things like commenting and algorithmic efficiency (something we didn't even know how to measure yet), but he never told us these things. When we did an assignment, we'd get it back all marked up to hell. "But it works! It does exactly what you said you wanted it to do!" Obviously, everyone hated this man, myself included. Only two As were given out in his class that semester, and I ended up with a B (and I wasn't too happy about that). Classes taught by other professors, on the other hand, were much simpler. However, during the course of the next year or so, it became abundantly clear to me that I had learned a hell of a lot more than everyone else, so when it came time to buckle down and write an operating system (!) I ended up not having as much trouble as many other students.

It didn't work that way for everyone in the class. While I hated the professor, I never stopped trying. I ended up learning from my mistakes, while others bitched and moaned about how unfair it was. Ironically, even those in the "easy" classes were complaining about how difficult the course was.

So now it's occurring to me that everyone feels like a victim. Take a little trip around the blogosphere and you'll see lots of protestations about the "liberal media". Then I head over to 4degreez and hear all the complaints about the "conservative media". Well, which is it? With respect to the media, everyone is a victim. Why is that?

I see both, all the time. The truth is that there are tons of both liberal and conservative media sources. You just have to know which is which and take them with the appropriate grains of salt. Yes, it's frustrating, I know, but playing the victim leads to ruin, and it prevents you from honing your arguments, making them stronger and more resistant to criticism.

Don't take this to mean that we should not be criticising the media. We should be, emphatically. Blogs are great for this in that they are fact-checking everyone and their mother, and will often print retractions of their own mistakes quickly and efficiently (alas, not all blogs are that trustworthy).

And really, the media could be doing a whole lot more to help us than it currently does, especially on the internet. On the internet, there are no compelling spatial boundaries, no character limits. There is no reason complete interview transcripts or official documents can't be posted along with an article. Hell, it's the internet - link to other sources and even criticisms. Let us make up our own minds! Traditional media is awful at this, though I have seen at least some examples of this sort of thing around. The only "problem" with that is that the media could no longer misquote people on a whim or creatively skew statistics simply because they don't like someone or something (if I had a dime for every time Wolfowitz was misquoted, I'd be a rich man. I know this because the DoD posts full transcripts of briefings, interviews, and press conferences on their site, much to the dismay of the media, who are now getting caught). There are tons of great ideas, none of which would be all that difficult to implement from a technical standpoint.

The media has lots of work to do, and with the increase of informational transparency in our society, they had better get going. Soon. In the meantime, if you're conservative, look at the liberal media as an opportunity for strengthening your arguments. Don't bitch and whine about the liberal media and dismiss it out of hand. If you're liberal, don't get pissed off that the media isn't repeating whatever new contradictory conspiracy theory you've concocted; take a page out of the bloggers' book. Fact-check their asses!
Posted by Mark on July 06, 2003 at 01:21 PM .: link :.


End of This Day's Posts

Thursday, June 12, 2003

Strange Days
"You know the world is going crazy when the best rapper is a white guy, the best golfer is a black guy, the Swiss hold the America's Cup, France is accusing the U. S. of arrogance, and Germany doesn't want to go to war" - NothingLasts4ever

What a quote, what a world!
Posted by Mark on June 12, 2003 at 11:22 AM .: link :.


End of This Day's Posts

Sunday, May 11, 2003

To hit or not to hit, that is the question
Gambling is a strange vice. Anyone with a brain in their head knows the games are rigged in the Casino's favor, and anyone with a knowledge of mathematics knows just how thoroughly they're rigged. But that doesn't stop people from dropping their paychecks in a few hours. I stopped by Atlantic City this weekend, and I played some blackjack. The swings are amazing. I only played for about an hour, but I am always fascinated by the others at the table, and even by my own reactions.

I don't play to win - rather, I don't expect to win - but I like to gamble. I like having a stack of chips in front of me; I like the sounds and the smells and the gaudy flashing lights (I like the deliberately structured chaos of the Casino). I allot myself a fixed budget for the night, and it usually adds up to approximately what I'd spend on a good night out. People watching isn't really my thing, but it's hard not to enjoy it at a Casino, and that's something I spend a lot of time doing. Some people have the strangest superstitions and beliefs, and it's fun to step back and observe them at work. Even though I know the statistical underpinnings of how gambling works at a Casino, I find myself thinking the same superstitious stuff, because it's only natural.

For instance, a lot of people think that if a player sitting at their table makes incorrect playing decisions, it hurts everyone else's odds. Statistically, this is not true, but when that guy sat down at third base and started hitting on his 16 when the dealer was showing a 5, you better believe a lot of people got upset. In reality, that moron's actions have just as much a chance of helping the other players as hurting them, but that's no consolation to someone who lost a hundred bucks in the short time since the guy sat down. Similarly, many people have progressive betting strategies that are "guaranteed" to win. Except, you know, they don't actually work (unless they're based on counting, but that's another story).

The odds in AC for Blackjack give the House an edge of about 0.44%. That doesn't sound like much, but it's plenty for the Casino, which would have an unfair advantage even if the odds were dead even. Don't forget, the Casino has deep pockets, and you don't. In order to take advantage of a prosperous swing in the game, you need to weather the House's streaks. If you're playing with $1000, you might be able to swing it, but don't forget, the Casino is playing with millions of dollars. They will break your bank if you spend enough time there, even if they don't have the statistical advantage. That's why you get comps when you win: they're trying to keep you there so as to bring you closer to the statistical curve.
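
If you're skeptical of the deep-pockets argument, a quick simulation makes it concrete. This is a sketch, not real blackjack: flat even-money bets with the win probability tuned to approximate that 0.44% edge, and a made-up bankroll and stake:

    import random

    def session(bankroll=1000, bet=25, p_win=0.4978, max_rounds=100_000):
        """Flat-bet until broke or out of rounds; return the final bankroll.
        A 49.78% win rate approximates a 0.44% house edge on even-money bets."""
        for _ in range(max_rounds):
            if bankroll < bet:
                return bankroll               # busted
            bankroll += bet if random.random() < p_win else -bet
        return bankroll

    results = [session() for _ in range(1000)]
    print(f"busted: {sum(r < 25 for r in results)} of 1000 players")

Run it and nearly every simulated player goes broke long before the 100,000 rounds are up - the edge is tiny, but the bankroll mismatch does the rest.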

The only way you can really win at Blackjack is to have the luck of a quick streak and the willpower to stop while you're up (as I noted before, if you're up a lot, the Casino will do their best to keep you playing), but that's a fragile system - you can't count on it, though it will happen sometimes. The only way to consistently win at Blackjack is to count cards. That can give you an advantage of around 1% (more on certain hands, less on others), depending on the House rules. This isn't Rain Man - you aren't keeping track of every card that comes out of the deck (rather, you're keeping a relative score of high-value cards to low ones), and you don't get an automatic winning edge on every hand. Depending on the count, the dealer can still play consistently better than you - but the dealer can't double down or split, and they only get even money for Blackjack. That's where the advantage comes from.

Of course, you have to have a pretty big bankroll to compensate for the Casino's natural "deep pockets" advantage, and you'll need to spend hundreds of hours practicing at home. Blackjack is fast and you need to be able to keep a running tab of the high/low card ratio (and you need to do some other calculations to get the true count), all the while you must appear to be playing normally, talking with the other players, dealing with the deliberately designed chaotic distractions of the Casino and generally trying not to come off as someone who is intensely concentrating. No small feat.
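
For what it's worth, the bookkeeping itself is simple; doing it at table speed while chatting amiably is the hard part. Here's a sketch of the standard Hi-Lo tally (the six-deck shoe is just an assumption for the example):

    # Hi-Lo values: +1 for low cards (2-6), 0 for 7-9, and -1 for tens,
    # face cards and aces (faces count as 10 here, aces as 11).
    HI_LO = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 0, 8: 0, 9: 0, 10: -1, 11: -1}

    def true_count(cards_seen: list[int], total_decks: int = 6) -> float:
        """Divide the running count by decks remaining to get the true count."""
        running = sum(HI_LO[c] for c in cards_seen)
        decks_left = max(total_decks - len(cards_seen) / 52, 0.5)
        return running / decks_left

    seen = [2, 4, 5, 6, 3, 10, 5, 6]          # running count: +6
    print(f"true count: {true_count(seen):+.1f}")   # about +1.0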

I'm not sure if that'd take all the fun out of it, not to mention draw the Casino's attention to me (which can't be fun), but it would be an interesting talent to have, and it's a must if you want to win. At the very least, it's a good idea to get the basic strategy down. Do that and you'll be better than most of the people out there (even if you just memorize the Hard Totals table, you'll be in good shape).
Posted by Mark on May 11, 2003 at 09:12 PM .: link :.


End of This Day's Posts

Tuesday, April 08, 2003

Living in Historic Times
"Wars have a way of overriding the days just before them. In the looking back, there is such noise and gravity. But we are conditioned to forget. So that the war may have more importance, yes, but still... isn't the hidden machinery easier to see in the days leading up to the event? There are arrangements, things to be expedited... and often the edges are apt to lift, briefly, and we see things we were not meant to...." - Thomas Pynchon, Gravity's Rainbow, page 474.
Human beings tend to remember an uncompleted task better than a completed one, ostensibly because an uncompleted task has no closure, and thus our mind must continually work to achieve closure. This is a drastic oversimplification of what psychologists call the Zeigarnik effect, and you can observe it in action in schools and restaurants across the world. Make a student take the same test he took the day before, and he'll probably do much worse. There are all sorts of similar psychological theories and, depending on how liberally you apply them, you can observe them in action all over the place.

Which makes me wonder, how will we remember this war twenty years from now? How will Bush be perceived? If things continue to go as well as they have, will history remember that this war was immensely unpopular in the world, or the seemingly conflicting and ambiguous motives of the US? Bush and the "Coalition of the Willing" experienced several setbacks in the months leading up to this war, but now, in hindsight, they seem small and insignificant. One of the few things I like about Bush is the way he reacted to these small setbacks. He barely flinched and kept his eye firmly on the long view. Perhaps in an application of the Zeigarnik effect on a historical level, Bush recognized that people will only remember how something ends, not the events, setbacks and all, that led us there. We've had a spectacularly successful start; now we just need to make sure it ends right... [Pynchon quote from War Words]
Posted by Mark on April 08, 2003 at 08:55 PM .: link :.


End of This Day's Posts

Thursday, August 29, 2002

Saturn Ascends
James Grimmelmann has revitalized the Laboratorium. He started blogging again, and since I mostly missed out on it last time, that makes me happy, because it's a pleasure to read his stuff. For the past year or so, he's been experimenting with various forms of writing and new web tools (that damn twiki-web thing that doesn't seem to work all that well) but has largely neglected the site, with updates coming only sporadically. It looks as if he's going to stick with it this time, though (which is more than I could say for myself!). Do yourself a favour and check him out.

The "return of saturn," is a popular theme derived from astrology and is often used in literature (among other art, such as music) as a symbol for a period of change in a person's life. Metaphorically speaking, you could say that James' Saturn is returning. I'm not sure how old he is, but this may even be true in the asrological sense, not that it would really matter. In any case, I was thinking about that idea when I came across James' revision, so that's why I named the post "Saturn Ascends". And you know how much I love cataloging lifes little footnotes...
Posted by Mark on August 29, 2002 at 08:51 PM .: link :.


End of This Day's Posts

Monday, August 05, 2002

Kryptonian Love Problems
Man of Steel, Woman of Kleenex by Larry Niven : A funny and very graphic (you were warned) description of the physiological problems Superman would face if he were to attempt to procreate. Niven is best known for his Science Fiction novels, most notably Ringworld (and its sequels), but he shows a biting sense of humour in this essay... Also, as an interesting side note, the influence of this article can be witnessed in Kevin Smith's Mallrats:
Brodie: It's impossible, Lois could never have Superman's baby. Do you think her fallopian tubes could handle his sperm? I guarantee he blows a load like a shotgun, right through her back. What about her womb, you think it's strong enough to carry his child?

TS: Sure, why not?

Brodie: He's an alien for Christssake. His Kryptonian biological makeup is enhanced by Earth's yellow sun. If Lois gets a tan the kid could kick right through her stomach. Only someone like Wonderwoman has a strong enough uterus to carry his kid. The only way he could bang regular chicks is with a Kryptonite condom, but that would kill him.
When compared to Niven's article, the only new thing is the kryptonite condom bit, but it's funny nonetheless... Still, Niven's article is great... [thanks to Jim Miller]
Posted by Mark on August 05, 2002 at 06:12 PM .: link :.


End of This Day's Posts

Monday, July 22, 2002

Surely You're Joking, Mr. Feynman!
Cargo Cult Science by Richard Feynman : Feynman's classic scathing critique of the pseudo-science typified by the "cargo cult" of South Sea islanders:
In the South Seas there is a cargo cult of people. During the war they saw airplanes with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas--he's the controller--and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.
You see this sort of thing often, usually done purposely in order to advance a certain agenda. As Feynman notes, one of the classic examples is advertising. "Wesson oil doesn't soak through food" - well, that's true. But what's missing is that no oil soaks through food when operated at a certain temperature; at a different temperature, all of them do, Wesson oil included. The claim is true, but the implication is misleading. To do away with this sort of thing, Feynman makes a few suggestions:
In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.
...
If you've made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish BOTH kinds of results.
These practices are indeed very important, and are often glossed over in the name of brevity or to save money... don't allow yourself to be fooled by silly correlations and inflated numbers. I've found that there are a lot of issues that are quite simple on the outside, but when you dig deep, you find lots of contradicting information, making the issue that much more complex... [link found via USS Clueless in the midst of a discussion of international law, though the entry about "benchmarks" of Macs also seems relevant]
Posted by Mark on July 22, 2002 at 05:47 PM .: link :.


End of This Day's Posts

Saturday, July 13, 2002

Chef Wars
Call Me Lenny by James Grimmelmann : Taco Bell is running a new ad called "Chef Wars" and it is an Iron Chef parody. The commercial is pathetic and James laments that Iron Chef is no longer considered to be a piece of elite culture. Essentially, Iron Chef is no longer cool because it has become so popular that even culturally bereft Taco Bell customers will understand the reference.

As a long time fan of Iron Chef, I suppose I can relate to James. Several years ago, a few drunk friends and I discovered Iron Chef one late night and fell in love with it. In the years that followed, it has grown more and more popular, to the point where there was even a pointless American version (hosted by Bill Shatner) and a rather funny parody on Saturday Night Live. Seeing those things made it less fun to be an Iron Chef fan, and to a certain extent, I agree with that point. But in a different way, Iron Chef is just as cool as it ever was, and, in my mind, a genuinely good show is, well... good, no matter how popular it is.

As commenter Julia (at the bottom) notes, there are two main issues that James is hitting on:
  1. The watering down of concepts from 30 minutes to 30 seconds completely distorts and lessens the impact of the elements that make the original great.
  2. The idea that a cultural item becomes less "cool" when it goes from 1 million to 100 million consumers.
Certainly, there is truth in those statements, but that is not all that is at work here. Iron Chef is a great show, and will always be so. After a while, a piece of culture will lose its "new and exciting" flavour, but if the show is good, it's good. James gives away how uncool he really is when he admits that he's only seen 6 episodes or so. Isn't it just a sham then? A facade? A ruse? Of what use is the cool if you never really enjoy it?

I suppose it all comes down to exclusion. Things are cool, in part, because you are cool enough to recognize them as such. But if everyone is cool, what's the point? Which brings us to Malcolm Gladwell and his Coolhunt:
"In this sense, the third rule of cool fits perfectly into the second: the second rule says that cool cannot be manufactured, only observed, and the third says that it can only be observed by those who are themselves cool. And, of course, the first rule says that it cannot accurately be observed at all, because the act of discovering cool causes it to take flight, so if you add all three together they describe a closed loop, the hermenuetic circle of coolhunting, a phenomenon whereby not only can the uncool not see cool but cool cannot be even adequately described to them."
But is it cool to just recognize something as cool? James recognized Iron Chef as cool, but he didn't really enjoy it. So I guess that we should seek the cool, but not be fooled into thinking something is cool simply because it is going to be big one day...
Posted by Mark on July 13, 2002 at 02:19 PM .: link :.


End of This Day's Posts

Wednesday, July 10, 2002

The Post 9/11 Doubt
A Heartbreaking Work of Staggering Evil by Nick Mamatas : Neil Gaiman's oeuvre, and the genre of horror/fantasy in general, is typically looked down upon as unsophisticated or childish, and the past decade saw a marked decrease in the Horror genre's relevance.
"9-11 resembled cheap, lazy fiction, and because it did, it made it strange for writers to decide what is valid artistically."
Horror was beginning to find new voices and new readers even before the attacks on the WTC, but now, after an initial period of doubt, there appears to be a renewed interest in the genre... "The everyday twisted horribly awry is, of course, the state of the nation post-9-11." Will Horror become popular again because it evokes fear of the magnitude we all felt on September 11? Time will tell. [thanks BJ]

Just to rewind a bit, I think the period of doubt mentioned above is a very important phenomenon, and I can see it happening all over the place. My very own weblog here, for instance, is a good example. I had posted fairly regularly up until September, focusing mainly on Film and various interesting articles on culture and whatnot, but after 9/11 my posting dropped off sharply and has been irregular ever since. The reason for this, I think, was that I felt there were more important things in life than my stupid blog. It just seemed so futile. There are certainly other factors, personal and professional, that also contributed to the dropoff, but I also think I needed to re-examine my goals here. My post-9/11 entries were scarce, and they began to lean more towards politics, as I became determined to keep up on current events. But I didn't want to become a warblogger (I still don't), and this limited my ability to post, because I didn't want every entry to be about the latest bullet flying over in the Middle East. So I'm hoping that I can live up to the demands of My Shifting Paradigm...
Posted by Mark on July 10, 2002 at 10:37 PM .: link :.


End of This Day's Posts

Sunday, April 14, 2002

Clowns are Scary
Blanky the Clown by riverrun : An E2 piece by the ever brilliant riverrun in which he admits more than a passing discomfort with clowns. In fact, they scare the shit out of him. Given his tale of Blanky, the resident clown in his home town, you could hardly blame him. Though I'll admit a passing discomfort with clowns myself (and, in fact, the entire carnival setting kinda creeps me out), I've had the fortune of never really crossing paths with them. Anyway, riverrun gives a very brief history of clowns, which have been around for quite some time, followed by the somewhat disturbing tale of Blanky.
Posted by Mark on April 14, 2002 at 10:39 PM .: link :.


End of This Day's Posts

Monday, February 25, 2002

Dynamic Duo
The Physical Genius and The Art of Failure by Malcolm Gladwell: An interesting duo of pseudo-related articles. The first posits the existence of a "physical genius", someone who possesses an "affinity for translating thought into action". The ironic thing about a physical genius, however, is that they really can't be described by cut-and-dried measurements of athleticism (in other words, there is no measuring stick like IQ for a physical genius). There is, in fact, much more to it than merely performing the act itself; it's knowing what to do. In the other article, The Art of Failure, Gladwell posits that there are two different types of failing: regression and panicking. Regression is when you become so self-conscious that you are thinking explicitly about what to do next instead of relying on your instincts and reactions (which you work hard to put into place; years of tennis lessons will give you an innate tennis sense, so to speak - but if you explicitly start thinking about each step, you will fail). Panicking is a sort of tunnel vision, in which you are so concerned about one problem that you forget you already know the usually simple solution.

Of course, Gladwell makes these points even more elegantly than I just did. In fact, I've found almost all of Gladwell's work fascinating, well researched, and well thought out. I found these two articles interesting because it seems that the physical genius doesn't really regress back to an explicit mode of operation. Why? I think it might be because they never learned these things explicitly, at least not the same way in which your average person does. They just know what to do, and they do it. I guess that's why they are called "geniuses".
Posted by Mark on February 25, 2002 at 08:46 PM .: link :.


End of This Day's Posts

Thursday, February 21, 2002

Disgruntled, Freakish Reflections™ on Happiness
Civilization, Thermodynamics, and 7-Eleven : "Man has never really solved problems so much as exchange one set for another, and what we call progress has simply been a series of shrewd trades that, while never reaching utopia, have at least left us with more desirable issues than the ones before." Everything has advantages and disadvantages, and we attempt to maximize our advantages while minimizing our disadvantages. But you'll notice that the disadvantages are never really eliminated. This is all well and good, but why do so few people see it? It's almost like we were raised to be unhappy. We're shown what we don't have, we learn that success means winning trophies and money, and that happiness relies on how much stuff we have. We're expected to live our lives in constant, multi-orgasmic bliss, and if we find ourselves unhappy, then we're failures. Of course, since we don't live in a Utopia, we will always be unhappy, and thus we will always be seeking new trophies to make us happy. Striving for self-improvement isn't wrong (it's quite honorable), but it won't necessarily make you happier. All too often, we set our sights on that one mystical thing that, if we could just achieve it, would make us happy. The only problem is, if you can't be happy now, chances are you won't be happy in the future, even if you do achieve your goals.

To paraphrase Dennis Miller, happiness doesn't always require resolution, but rather an in-the-moment, carefree acceptance of the fact that the worst day of being alive is better than any day of being dead. Happiness isn't settling for less; it's just not being miserable with what you've got. So reach for the stars, but remember, you're just trading one set of disadvantages for another, and you might not be any happier than you are now...
Posted by Mark on February 21, 2002 at 01:02 PM .: link :.


End of This Day's Posts

Thursday, January 24, 2002

Wing Bowl X
Every year, on the Friday before Super Bowl Sunday, Philadelphians gather at the First Union Center for a different type of contest: The Wing Bowl. A tradition that started 9 years ago, the annual Wing Bowl festivities begin at the crack of dawn. The audience tailgates in the parking lot while the contestants prepare to eat as many Buffalo wings as possible in a 30-minute time span. It's become a hallmark of Philly life, with more than 20,000 people showing up for last year's event. Only in Philly. Last year's winner is nicknamed "El Wingador", and he ate 137 wings in 30 minutes (the highest score of all time was 164 wings!).

Particularly interesting, and more disgusting than eating 100+ wings in 30 minutes, are the qualifying stunts performed by Wing Bowl hopefuls. A good stunt typically involves some gross variety of food, eaten quickly and in mass quantities (strange, as I would think that has little to do with your wing-eating ability). Highlights this year include people eating: four pounds of tripe in 20 minutes, a pig's head (including snout, cheek and the brain), a dozen hard-boiled eggs with shells in 24 minutes, fifty raw clams in fifteen minutes, three pounds of head scrapple and a bottle of hot sauce in 20 minutes, and one pound of uncooked penne pasta with only 8 ounces of water in 20 minutes. Only in Philly...

1/25/02 - Update: El Wingador does it again. 143 total wings (81 in the first 14 minutes). Three-time champ. I love the nicknames these guys have; there was a 15-year-old student in the contest - his nickname was Lord of the Wings...
Posted by Mark on January 24, 2002 at 11:27 AM .: link :.


End of This Day's Posts

Wednesday, November 07, 2001

No Whammy, no Whammy, STOP!
Back in May of 1984, history was made as Michael Larson, an unemployed ice cream truck driver from Ohio, managed to win $110,237 on the classic CBS television game show Press Your Luck. Having watched Press Your Luck since it premiered, Larson came to the conclusion that the swift, seemingly random flashing lights that bounced around the Press Your Luck board were not as random as they seemed. By taping the show religiously and pausing the tapes, Larson discovered that there were just six light patterns on the board. With this bit of knowledge, he practiced at home while watching the show and realized that he could stop the board wherever and whenever he wanted, if he just had patience. The article is worth visiting, if only to see the looks on the host's face as Larson racked up the dough. Ironically, Larson eventually wound up losing all his winnings in a bad housing investment deal.
Posted by Mark on November 07, 2001 at 11:59 AM .: link :.


End of This Day's Posts

Wednesday, October 10, 2001

Planetarium
Planetarium is an on-line puzzle story in twelve weekly instalments. The story is presented one week at a time; each week containing three puzzles. At the end of the twelve weeks, the answers to the thirty-six puzzles can be put together to solve a metapuzzle, which ties back into the plot of the story. Planetarium is primarily a story, so it doesn't matter if you solve the puzzles or not; they'll tell you the answers after twelve weeks anyway. Each Planetarium instalment consists of an illustration of a scene in the story, framed in a border with other puzzle elements and buttons. Clicking on the characters (or objects) within the illustration evokes text relating to that character - perhaps a dialogue they are having with another character, or part of the story narrative, or possibly a riddle that the character is presenting. I'm only on the first week, but I think I'm hooked.

I found this link via Mindful Link Propagation, which is notable in and of itself, as it is the latest project over at the Laboratorium and it contains many interesting and thoughtful links.
Posted by Mark on October 10, 2001 at 11:58 AM .: link :.


End of This Day's Posts

Tuesday, October 09, 2001

The Fifty Nine Story Crisis
In 1978, William J. LeMessurier, one of the nation's leading structural engineers, received a phone call from an engineering student in New Jersey. The young man was tasked with writing a paper about the unique design of the Citicorp tower in New York. The building's dramatic design was necessitated by the placement of a church. Rather than tear down the church, the designers, Hugh Stubbins and Bill LeMessurier, set their fifty-nine-story tower on four massive, nine-story-high stilts, and positioned them at the center of each side rather than at each corner. This daring scheme allowed the designers to cantilever the building's four corners, allowing room for the church beneath the northwest side.

Thanks to the prodding of the student (whose name was lost in the swirl of subsequent events), LeMessurier discovered a subtle conceptual error in the design of the building's wind braces: they were unusually sensitive to a certain kind of wind known as quartering winds. This alone wasn't cause for worry, as the braces would have absorbed the extra load under normal circumstances. But the circumstances were not normal. There had been a crucial change during manufacture: the braces were fastened together with bolts instead of welds (welds are generally considered stronger than necessary and overly expensive), and the contractors had interpreted the New York building code in such a way as to exempt many of the tower's diagonal braces from load-bearing calculations, so they had used far too few bolts. This multiplied the strain produced by quartering winds. Statistically, a storm severe enough to tear a joint apart could be expected once every sixteen years (what meteorologists call a sixteen-year storm). This was alarmingly frequent. To further complicate matters, hurricane season was fast approaching.
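To put "once every sixteen years" in perspective, here's a quick back-of-the-envelope calculation, assuming the standard return-period reading of that phrase: an independent 1-in-16 chance in any given year.

```python
# P(at least one sixteen-year storm in a span) = 1 - (1 - 1/16)^years
p_yearly = 1 / 16

def prob_at_least_one(years, p=p_yearly):
    return 1 - (1 - p) ** years

for span in (1, 5, 10, 16, 30):
    print(f"{span:>2} years: {prob_at_least_one(span):.0%}")
# About 6% in any single year, and roughly 64% over sixteen years --
# terrible odds for a building meant to stand for generations.
```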

The potential for a complete catastrophic failure was there, and because the building was located in Manhattan, the danger extended to much of the surrounding city. The fall of the Citicorp tower could have triggered a domino effect, wreaking a devastating toll of destruction across New York.

The story of this oversight, though amazing, is dwarfed by the series of events that led to the building's eventual structural integrity. To avert disaster, LeMessurier quickly and bravely blew the whistle - on himself. LeMessurier and other experts immediately drew up a plan in which workers would reinforce the joints by welding heavy steel plates over them.

Astonishingly, just after Citicorp issued a bland and uninformative press release, all of the major newspapers in New York went on strike. This fortuitous turn of events allowed Citicorp to save face and avoid any potential embarrassment. Construction began immediately, with builders and welders working from 5 p.m. until 4 a.m. to apply the steel "band-aids" to the ailing joints. They built plywood boxes around the joints so as not to disturb the tenants, who remained largely oblivious to the seriousness of the problem.

Instead of lawsuits and public panic, the Citicorp crisis was met with efficient teamwork and a swift solution. In the end, LeMessurier's reputation was enhanced for his courageous honesty, and the story of Citicorp's building is now a textbook example of how to respond to a high-profile, potentially disastrous problem.

Most of this information came from a New Yorker article by Joe Morgenstern (published May 29, 1995). It's a fascinating story, and I found myself thinking about it during the tragedies of September 11. What if those towers had toppled over in Manhattan? Fortunately, the WTC towers were extremely well designed - they didn't even noticeably rock when the planes hit - and when they did come down, they collapsed in on themselves. They would still be standing today, too, if it weren't for the intense heat that weakened the steel supports.
Posted by Mark on October 09, 2001 at 08:04 AM .: link :.


End of This Day's Posts

Monday, September 10, 2001

Wasting Time
I Play Too Much Solitaire, and it's Putting Me in a Time Warp by Douglas Coupland : Why do I choose to waste time playing solitaire? And why will I, in all likelihood, cheerfully continue to waste thousands more hours playing solitaire? These are questions Coupland and, no doubt, millions of others have pondered. Interestingly enough, I find that this spills over into much more than solitaire. What of my thousands of NHL 98 or Unreal Tournament games? Or the countless hours spent trolling the net? Time wasted? Perhaps. Will I continue to waste it? Undoubtedly. Why? I have no idea. Coupland's father used to play solitaire all the time, and now, thanks to a computer, he still plays almost every day. When asked why, he replies:
"That's easy. Every time I press the key and it deals me a new round, I get this immense burst of satisfaction knowing that I didn't have to shuffle the cards and deal them myself. Its payback time for all the hours I ever wasted in my life shuffling and dealing cards."
Which brings me to the thought that maybe we aren't really wasting time at all. Maybe we just need to realize that the past is gone, whether we like it or not. By the way, I found Coupland's site insightful and fun, though I'm a bit annoyed at the use of Flash (is it really necessary to put a full-text article into Flash? It sure as hell makes it difficult to pull quotes!)
Posted by Mark on September 10, 2001 at 11:15 AM .: link :.


End of This Day's Posts

Friday, August 31, 2001

Someone is a werewolf. Someone ... in this very room.
Werewolf is a simple game for a large group of people (seven or more). Two of the players are secretly werewolves, trying to slaughter everyone in the village. Everyone else is an innocent human villager, but one of the villagers is a seer (who can detect lycanthropy). Some people call it a party game, but it's really a game of accusations, lying, bluffing, second-guessing, assassination, and mob hysteria. Sounds like a blast to me. [via metafilter]
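The setup is simple enough that a short sketch can stand in for a moderator's deck of role cards. This follows the arrangement described above (two werewolves, one seer, villagers for the rest); the player names are made up.

```python
import random

def deal_roles(players):
    """Randomly assign roles: two werewolves, one seer, villagers for the rest."""
    if len(players) < 7:
        raise ValueError("Werewolf wants seven or more players")
    roles = ["werewolf", "werewolf", "seer"]
    roles += ["villager"] * (len(players) - len(roles))
    random.shuffle(roles)
    return dict(zip(players, roles))

village = ["Ann", "Ben", "Cal", "Dee", "Eli", "Fay", "Gus"]
for name, role in deal_roles(village).items():
    # in a real game, each player learns only their own role
    print(f"{name}: {role}")
```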

I recently participated in a similar game called "The Mole," in which two teams try to complete certain tasks while a saboteur (a "mole") works against each team from within. Of course, my team emerged victorious, thanks mostly to a brilliant strategy in the opening round that gave us a commanding lead. The other team became a little bitter about that, as evidenced by this highly biased, but also hilarious, mock review of the event (I am the one referred to as "Mark" in said review).
Posted by Mark on August 31, 2001 at 02:37 PM .: link :.


End of This Day's Posts

Wednesday, August 15, 2001

Greatest Hits
The Mob is an American business institution. Killing people is just part of the business, but it's a very costly part. Cops look the other way for burglary or hijacking, but not for murder. The press and the public don't generally tolerate this sort of thing, and yet, those very murders that bring the most powerful wrath of law enforcement and public scrutiny down on the Mob are responsible for their greatest cultural legacy. [Warning: graphic images ahead - proceed at your own risk] Who can forget the picture of Carmine Galante sprawled on a restaurant floor, cigar in his mouth? Or the bloody picture of Ben "Bugsy" Siegel, his face pretty much blown off? These infamous Mafia hits stick in our consciousness longer than any degree of bootlegging or hijacking ever could.

Update: Removed links to images because Google images was acting funny.
Posted by Mark on August 15, 2001 at 09:25 AM .: link :.


End of This Day's Posts

Thursday, July 12, 2001

Customer "Support"
Everyone has had a terrible customer support experience at least once in their life. Those cursed with having to deal with customer service often would do well to learn The Art of Turboing. Turboing, essentially, refers to the actions of a customer who goes around the normal technical support process by contacting a senior person in the chain of command. The article does a great job describing the process and how to go about it. The idea of Turboing sounds worse than it is, but the article also makes clear that you should turbo only when you've exhausted all other avenues of support and hit a dead end. So go forth, my service-maligned readers, and Turbo your way to victory. Or something. [via memepool]

Some good stuff is being discussed over at DyREnet's message board. First, it seems that Drifter has revealed the great secrets of Man.com (the mystery that started with a cryptic and utterly annoying Tandem Story entry on this page). Also, check out the discussion on Coke, including my own moronic exploits with cola.
Posted by Mark on July 12, 2001 at 02:57 PM .: link :.


End of This Day's Posts

Wednesday, July 11, 2001

Searching for Bobby Fischer
A Mystery Wrapped in an Enigma by William Lombardy : A 1974 Sports Illustrated article providing a detailed account of Bobby Fischer's struggle and eventual victory in the 1972 World Chess Championship. I've never been much good at Chess, but I have a certain fascination and respect for those who are. Fischer comes off as emotionally unstable in the article, but I have this sneaking sort of suspicion that every little move (or complaint) he made was calculated. Sometimes he won before he even entered the arena. But then, he is definitely an odd person as well, so who really knows?
Posted by Mark on July 11, 2001 at 04:49 PM .: link :.


End of This Day's Posts

Thursday, July 05, 2001

Probable Monopoly
Probabilities in the Game of Monopoly has all the numbers you could ever possibly need to play Monopoly more efficiently: most probable squares, how long it takes for investments to pay off, which properties are better to mortgage, where to build hotels, and which squares get landed on first.
The railroads are excellent investments, particularly when owned together, although in absolute income terms they don't keep up with heavily built-on properties later in the game. The best return on investment to be found is from putting a third house on New York Avenue. In fact, the third house has the fastest payoff of any building on almost all of the properties. The square most landed on other than Jail is Illinois Avenue, and in fact a hotel there will bring the most income other than a hotel on Boardwalk. By far the worst individual investment is to buy Mediterranean Avenue without first owning Baltic. That's not to say that you shouldn't buy it, but it's not going to make you much money without quite a bit of construction. The properties between the Jail square and the Go To Jail square are landed on the most, because of the jump caused by landing on Go To Jail. The orange ones have the biggest bang for the buck as far as building goes.
All the probabilities were computed with a long-term computer simulation. I suppose this whole thing may seem excessive, but it's quite interesting, and it's nice to know that the orange properties are the best to own and build on. The simulations do not, however, take into account all the shady dealings between players ("I'll trade you St. Charles Place, which will give you a monopoly, for Baltic Ave. and 5 free passes on any of your properties") that can be ever-so-crucial to the outcome of the game. [via Bifurcated Rivets]
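The simulation behind numbers like these doesn't have to be fancy. Here's a minimal sketch under deliberately crude assumptions - 40 squares, two dice, and the Go To Jail jump, ignoring the Chance/Community Chest cards and the three-doubles rule - which is already enough to show Jail and the squares a dice-roll past it soaking up extra traffic.

```python
import random
from collections import Counter

SQUARES = 40
GO_TO_JAIL, JAIL = 30, 10

def simulate(turns=1_000_000):
    """Roll two dice around a bare 40-square board, applying only the
    Go To Jail jump, and count where we land."""
    counts = Counter()
    pos = 0
    for _ in range(turns):
        pos = (pos + random.randint(1, 6) + random.randint(1, 6)) % SQUARES
        if pos == GO_TO_JAIL:
            pos = JAIL
        counts[pos] += 1
    return counts

landings = simulate()
total = sum(landings.values())
for square, n in sorted(landings.items(), key=lambda kv: -kv[1])[:5]:
    print(f"square {square:>2}: {n / total:.2%}")
# Jail (10) comes out on top, with the squares six to nine spaces past
# it -- the orange block -- picking up the follow-on traffic. The full
# analysis adds the card decks, which are what boost Illinois Avenue.
```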
Posted by Mark on July 05, 2001 at 01:01 PM .: link :.


End of This Day's Posts

Friday, June 29, 2001

Industrial Luddite
Is It O.K. to Be a Luddite? by Thomas Pynchon : Luddite. It sounds like an element, doesn't it? Basically, a Luddite is someone who opposes technology. Pynchon tackles the subject with his usual gusto:
Except maybe for Brainy Smurf, it's hard to imagine anybody these days wanting to be called a literary intellectual, though it doesn't sound so bad if you broaden the labeling to, say, "people who read and think." Being called a Luddite is another matter. It brings up questions such as, Is there something about reading and thinking that would cause or predispose a person to turn Luddite? Is It O.K. to be a Luddite? And come to think of it, what is a Luddite, anyway?
Pynchon goes into the history of the Luddites, from Ned Lud straight through to Frankenstein and Star Wars references - oh, and let's not forget that all-important folk hero, the Badass. There's something about scholarly discussion of the Badass that I just find compelling. Anyway, if anyone wants to give themselves a headache, check out Pynchon's acclaimed classic Gravity's Rainbow (and for people who want to lessen the strength of said headache, you can buy a 345-page book containing the Sources and Contexts for Pynchon's novel). Actually, from what I've read of it (which is, admittedly, not much), it's quite good. [via wood s lot]
Posted by Mark on June 29, 2001 at 02:31 PM .: link :.


End of This Day's Posts

Tuesday, June 19, 2001

How Science Ignores the Natural World
Where the Buffalo Roam - How Science Ignores the Natural World : An interview with Vine Deloria, one of the most important living Native American writers. Central to Deloria's critique of Western culture is the understanding that, by subduing nature, we have become slaves to technology and its underlying belief system.
"...Indians experience and relate to a living universe, whereas Western people - especially scientists - reduce all things, living or not, to objects. The implications of this are immense. If you see the world around you as a collection of objects for you to manipulate and exploit, you will inevitably destroy the world while attempting to control it. Not only that, but by perceiving the world as lifeless, you rob yourself of the richness, beauty, and wisdom to be found by participating in its larger design."

"Science insists, at a great price in understanding, that the observer be as detached as possible from the event he or she is observing. Contrast that with the attitude of indigenous people, who recognize that humans must participate in events, not isolate themselves."
This is the sort of thing you don't hear very often, and it's very interesting. Deloria makes some great points (along with some I don't particularly agree with, but which are interesting nonetheless), especially about science and how it attempts to reduce everything to a paradigm. Doing so certainly has its value, but much like every other version of reality that gets put forward, science is not completely satisfactory.
"...the point is to ask the questions, and keep asking them."
Right on. [via liquid gnome]
Posted by Mark on June 19, 2001 at 11:49 AM .: link :.


End of This Day's Posts

Wednesday, June 06, 2001

Motivation
Structured Procrastination : an amazing strategy that converts procrastinators into effective human beings, respected and admired for all they can accomplish and the good use they make of time. I like this optimistic approach, turning a weakness into a strength. This website itself is basically another project in a long series of attempts at avoiding responsibility. It's funny: I've always noticed this phenomenon, where I seem to be at my most creative when I've got tons of important stuff I should be doing, but I never got around to articulating it like this guy did. [via cafedave.net]

Procrastination: "Hard work often pays off after time, but laziness always pays off now."
Posted by Mark on June 06, 2001 at 08:49 AM .: link :.


End of This Day's Posts

Monday, May 14, 2001

Chick Football
Football gets in touch with its feminine side: The Philadelphia Liberty Belles are one of 10 charter members of the National Women's Football League. The 45-woman team, which plays on high school fields and travels by bus, has romped over its first three opponents in an eight-game schedule that runs from April to June. The players buy their own uniforms, pay their own insurance, and raise money with car washes. And they don't earn a cent, despite their ass-kicking performance. So far the Belles have shellacked three opponents by a combined score of 106-6. I've never seen them play, but I imagine it's quite an entertaining experience; not just because it's women playing, but because they're genuinely in love with the game of football. Some day, if and when they become profitable, the league might lose that quality, so I hope to catch a game soon...
Posted by Mark on May 14, 2001 at 08:53 AM .: link :.


End of This Day's Posts

Thursday, May 10, 2001

Hope and Gory
Chuck Palahniuk (author of Fight Club) writes about the Olympic wrestling trials. Amateur wrestling, not WWF or any of its ilk. The article mostly gets it right. I was a wrestler. I have cauliflower ear. I cut too much weight. I've walked off the mat and puked in a trash can. I broke my thumb once. I had ringworm. I did it all. And I wasn't even that good. So why did I do it? For the life of me, I can't nail down a solid answer to that question, yet I know that if I could do it again, I would. Palahniuk focuses mostly on the physical pains of wrestling, but there's more to the sport than pain. Pain is part of it, and it's not a bad thing either (Palahniuk does a good job describing this), but there's a lot of technique, elegance, and beauty in the sport as well. Sometimes it just takes a wrestler to recognize it when it's happening. Which, I suppose, is why the sport has such a weird reputation...
Posted by Mark on May 10, 2001 at 02:04 PM .: link :.


End of This Day's Posts

Friday, April 20, 2001

File this under "Corny"
The Collective Unconsciousness Project is an interesting attempt at creating a non-linear experience based on chance and the user's interactions. Users contribute by logging their dreams, then explore an environment that lets you travel from dream to dream in a non-linear yet interconnected way - without being made aware of what those connections are, and without being in control of the path you take. The flow is based on things like the dream you are currently viewing, what you've viewed in the past, what dreams you've entered into your own dream log, what emotions are related to that dream, and so on. Unexpected connections will be made, with hopefully interesting results. It's not functional yet (not enough people have entered dreams), but once it is, I think it will be worth viewing... Go and enter your dreams now (no registration required).
Posted by Mark on April 20, 2001 at 04:41 PM .: link :.


End of This Day's Posts

Tuesday, March 20, 2001

UAIOE for you and me
This Evolution of Alphabets page brings a little-known subject to life with sensible, concise animations. You can see the evolution of eight character sets, including our very own Latin character set. It's always nice to see people using web animation for something useful. [via blog.org]
Posted by Mark on March 20, 2001 at 01:09 PM .: link :.


End of This Day's Posts

Saturday, March 17, 2001

GO
In the movie Pi, there are several scenes where the movie's protagonist takes a break from his work to visit his teacher and mentor. During these visits, they play an ancient Asian game called Go. Basically, the Go board has a grid and some black and white stones. The rules of Go are incredibly simple, yet mastering the game is a lifelong, and sometimes life-consuming, effort. Indeed, the game is much more than just a game to its devoted players. Some people kill themselves when they lose. Some do it for a living. Some people even believe that it could save our public education system. For others, it represents the Holy Grail of computing (as it is incredibly difficult to program). Pi was originally supposed to pit student and mentor against each other in a game of chess, but they changed it to Go, and the movie benefits greatly. Go reflects the movie's central themes; it represents a certain synthesis between spiritual and rational life... [thanks alt-log]
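On the "Holy Grail of computing" point, a rough back-of-the-envelope shows why brute force gets nowhere. Using the usual ballpark figures - about 35 legal moves per chess position over games of roughly 80 plies, versus about 250 moves per Go position over roughly 150 plies - the game trees differ by hundreds of orders of magnitude.

```python
# Game-tree size grows as branching_factor ** depth, so even coarse
# estimates make the chess-vs-Go gap obvious.
def tree_magnitude(branching, depth):
    """Order of magnitude (power of ten) of branching ** depth."""
    return len(str(branching ** depth)) - 1

print(f"chess ~ 10^{tree_magnitude(35, 80)}")    # ~10^123
print(f"go    ~ 10^{tree_magnitude(250, 150)}")  # ~10^359
```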
Posted by Mark on March 17, 2001 at 09:47 AM .: link :.


End of This Day's Posts

Thursday, February 22, 2001

Trapped Inside the Box
In yesterday's exercise, we saw that thinking outside the box is important, but that certainly doesn't mean thinking inside the box isn't. It is often useful to quickly classify someone or something based on a small set of criteria, even if those criteria may not give an accurate description of said person. It's very similar to the information filtering Umberto Eco spoke about in that interview I posted a while back. In certain situations, we absolutely must revert to simple mental models just to filter all the information coming at us. It doesn't matter how imperfect the filter is; we need something, or else we won't accomplish anything. I'm also fascinated by the ingenuity of people who are forced to think within a box (and the ways they work around it). My favourite example is Isaac Asimov's 3 Laws of Robotics:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where those orders would conflict with the First Law.
  3. A robot must protect its own existence except where such protection would conflict with the First or Second Law.
In his Robot novels, Asimov figured out all sorts of clever ways to work around those rules he created. In pushing the limits of the 3 Laws, Asimov was not only working within a box, but also making it an enjoyable experience for the reader. Of course, later in the series, Asimov begins to think outside the box and expands his scope a little, but that doesn't make his 3 Laws obsolete, just more impressive.
Posted by Mark on February 22, 2001 at 06:12 PM .: link :.


End of This Day's Posts

Wednesday, February 21, 2001

Thinking Outside of the Pie
A simple exercise:
The circle to the right represents a pie. Your goal is to cut this pie into 8 pieces using only three lines. Have at it!

Solution (swipe text below):
The trick is to think three-dimensionally. First, quarter the circle with two lines (or slices, if you will). Then remember that there is a third dimension that cannot be seen in the picture. If you cut along that axis, you end up with 8 pieces of pie!
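As a sanity check, eight is exactly the theoretical maximum here: the most pieces you can get from n straight (planar) cuts of a solid is given by the so-called cake numbers, C(n) = (n^3 + 5n + 6) / 6.

```python
def cake(n):
    """Maximum pieces from n planar cuts of a solid: (n^3 + 5n + 6) / 6."""
    return (n**3 + 5 * n + 6) // 6

print([cake(n) for n in range(5)])  # [1, 2, 4, 8, 15]
# Three cuts really can give eight pieces: two vertical quarterings
# plus one horizontal slice through the middle.
```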
Posted by Mark on February 21, 2001 at 06:30 PM .: link :.


End of This Day's Posts

Sunday, February 18, 2001

Better Living
DyREnet has some useful tips for better living. Samæl's extremely happy with his new Houseplant, while Spencer was let down by her Papermate-Comfort Mate, medium ball, black ink, click-action, writing pen after years of support. DyREnet also has some new and spiffy random taglines. Some of my favourites include: "Still legal in sixteen states.", "no Subliminal mEssages eXistant here", "We're not quite the downfall of man, but we're trying.", and "The masses have spoken; we just didn't listen." Keep it up, DyRE, and I'll have to kill you.

Uh, well, maybe not.
Posted by Mark on February 18, 2001 at 08:43 PM .: link :.


End of This Day's Posts

Thursday, January 25, 2001

When Minotaurs Attack!
Theseus and the Minotaur, an addictive Java applet game that is also quite difficult. There's also a history of Theseus and the Minotaur mazes and other (easier) mazes. [thanks to eatonweb]
Posted by Mark on January 25, 2001 at 01:11 PM .: link :.


End of This Day's Posts

Wednesday, January 24, 2001

A Conversation on Information
Umberto Eco is a professor of semiotics, philosophy and literature at the University of Bologna in Italy, and he is well known for his academic publications as well as popular fiction such as The Name of the Rose and Foucault's Pendulum (which I am currently reading). In this interview, Eco discusses the Internet, information overload and filtering, hypertext, hypermedia, and virtual reality. He was very open-minded and articulate in his descriptions and criticisms of the internet and information filtering, especially given how undeveloped the internet was at the time.
"I am not saying that Internet is, or will be a negative experience. I am saying on the contrary that it is a great chance. Once we have asserted this, I am trying to isolate the possible traps; the possible negative aspects."
Much of the interview is spent discussing information filtering: why it is necessary, and how it becomes difficult on a system like the internet, where the number of options is often overwhelming (like going to Google, typing Umberto Eco, and getting back 61,200 results). Another topic is communities on the internet. He is enthusiastic about the possibilities, but he adds that the information still must be filtered. You must choose which posts and authors you wish to read, and we often choose them randomly; if we had a filter, we could know which posts are important and which are crap. Regardless, he likes the idea of finding new ideas and perspectives through the internet community. "Is that a substitute for face-to-face contact and community? No, it isn't!" Fascinating stuff.
Posted by Mark on January 24, 2001 at 12:28 PM .: link :.


End of This Day's Posts

Tuesday, December 19, 2000

Role Playing
Check out The Window, for role playing the way it should be ("simple, usable, and universal"). The Three Precepts on which it is based are solid and actually contribute to the storytelling aspects of RPGs (as evidenced by the third precept: "A good story is the central goal."). Check it out; I found it fascinating (and I don't even play RPGs anymore). In fact, some of the ideas there have inspired me to perhaps create a different form of Tandem Story...
Posted by Mark on December 19, 2000 at 01:51 PM .: link :.


End of This Day's Posts

Monday, December 18, 2000

Ushering in Twelve Eighteen
Yes, today is twelve eighteen. What, you may ask, is twelve eighteen? Well, it's one two one eight. Before you ask, one two one eight is twelve eighteen. What the hell does this have to do with anything? Everything, of course. Chaos theorists have pondered those stories carefully (specifically the Yankee Stadium incident and the mathematics of 1218), and some believe them to be central to gaining the necessary understanding of the universe.
Posted by Mark on December 18, 2000 at 12:23 PM .: link :.


End of This Day's Posts

Thursday, December 14, 2000

Lost Luggage
Ever wonder what the airlines do with your luggage? Sure, they claim 97% of lost luggage is returned to its rightful owner within 24 hours and another 1.5% within 2 days, but what about the remaining 1.5%? Well, after 6 weeks, they sell it (and going by the percentages, this works out to somewhere around 435,000 bags). Apparently most of the lost bags end up in a small Alabama town at the Unclaimed Baggage Center, which, in turn, sells the contents of the lost bags at discount prices. In case you don't feel like hopping on a plane to visit Alabama (what would you do with your luggage?), you can always visit their webpage and buy stuff online.
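Working those percentages backwards (assuming the 435,000 figure really is that unreturned 1.5%) gives a sense of the scale involved:

```python
# If 1.5% of lost bags comes to about 435,000 bags, the airlines are
# misplacing roughly 29 million bags a year, at least temporarily.
unreturned = 435_000
share = 0.015
print(f"implied total lost bags per year: {unreturned / share:,.0f}")
# implied total lost bags per year: 29,000,000
```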
Posted by Mark on December 14, 2000 at 12:45 PM .: link :.


End of This Day's Posts

Tuesday, December 12, 2000

Snacktastrophies
Pudding-Factory Disaster Brings Slow, Creamy Death to Town Below: This article ran a while ago, but I think it's the funniest thing I have ever read over at The Onion. An excerpt: "Sweet, creamy death swept through this small Illinois town Monday... burying hundreds of residents in a rich, smooth tidal wave of horrifying pudding goodness." Priceless descriptions of the tragic, tragic horror of the delectably Choco-Licious™ death-pudding.
Posted by Mark on December 12, 2000 at 12:27 PM .: link :.


End of This Day's Posts

Monday, December 11, 2000

Demotivators
To find the perfect gift for those hopeless people in your life, go to Despair, Inc., a company that sells demotivational posters similar to the popular motivational posters found in most business settings. My favourite demotivators:
  • Procrastination - "Hard work often pays off after time, but laziness always pays off now."
  • Apathy - "If we don't take care of the customer, maybe they'll stop bugging us." {I printed this out and put it in my cube.}
  • Blame - "The secret to success is knowing who to blame for your failures."
LOL. I might actually buy one of these someday...
Posted by Mark on December 11, 2000 at 12:38 PM .: link :.


End of This Day's Posts

Friday, December 08, 2000

Bah Humbug
Lawyer Wants To Bar Christmas as Federal Holiday: This grinch has been trying to steal Christmas for almost 3 years now, arguing that having Christmas as a federal holiday violates the separation of church and state: "the Christmas holiday amounts to a government approval for a day of Christian religious origins marking the birth of Jesus Christ." This guy obviously doesn't know much about the history of Christmas, which has its origins in pagan rituals that were later adopted by Christianity to celebrate the birth of Christ. In my opinion, Christmas is such a wondrous holiday because of its secular aspects, including holly, ivy, mistletoe, Christmas trees, Santa Claus, snowmen, jingling bells, and presents on Christmas morning (all of which have been repeatedly recognized by US courts). Furthermore, this is a season whose very message transcends any specific religion, ideology, or tradition to become an occasion for collective reflection on the values that bring us together. Let's just hope the courts stand firm...
Posted by Mark on December 08, 2000 at 09:30 AM .: link :.


End of This Day's Posts

Tuesday, December 05, 2000

Artificial Idiocy
This Java applet attempts to implement the classic "Eliza" program, which pretends to be a Rogerian psychologist. It was groundbreaking in its time, but it's ultimately a pretty shallow AI (that, or Rogerian psychologists are complete morons, which is probably not too far from the truth). It's pretty easy to take advantage of the system. As DyRE found out, Never Go to a Rogerian Psychologist When You're On Fire.
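The whole trick is shallower than it looks: match a keyword pattern, swap the pronouns, and echo the user's own words back as a question. Here's a minimal sketch of that idea - the rules below are made-up stand-ins, not Weizenbaum's original script - and its last line shows exactly the kind of nonsense that makes the system so easy to abuse.

```python
import random
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(line):
    """Fire the first matching rule, echoing the user's words back."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I am on fire"))  # How long have you been on fire?
print(respond("My leg hurts"))  # Tell me more about your leg hurts.
```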
Posted by Mark on December 05, 2000 at 08:52 AM .: link :.


End of This Day's Posts

Thursday, November 30, 2000

Absurdity
Some recent headlines (no, they are not from The Onion, but they probably should be):
Posted by Mark on November 30, 2000 at 12:49 PM .: link :.


End of This Day's Posts

Tuesday, October 31, 2000

Exorcism
Some interesting happenings in the world of exorcism. In a recent study that highlights the malleability of memory and perception, psychologists were able to convince normally skeptical people that they had experienced a possession at some point in their lives. As if marking the occasion, the 1973 classic film The Exorcist was recently re-released, and psychologists expect a rash of new possessions. Also, it seems the old rite of exorcism is gaining new respect. I read the book by William Peter Blatty a while back and was surprised at just how detailed the psychological aspect of the story was. By the end of the book, I was still unsure whether the possession was caused by psychological influences or some supernatural power. In fact, the rite of exorcism was shown to be a surprisingly methodical procedure, and I was duly impressed with the novel's objective approach. However, the book does not quite capture the pea-soup-projectile-vomit themes too well :-)
Posted by Mark on October 31, 2000 at 06:08 PM .: link :.


End of This Day's Posts

Friday, October 27, 2000

Terror Behind the Walls
I recently visited Eastern State Penitentiary's haunted house, Terror Behind the Walls. It was a pretty good haunted house; my only complaint is that there were way too many people walking through with me (thus I saw many of the people in front of me get scared). The creepiest part, however, was simply walking down the dark corridors of the old, decaying site, looking into the cells and seeing only darkness. At the end of the tour, there was a small museum showing the far more interesting history of the old penitentiary.

Eastern State Penitentiary was built in the 1820s under the Quaker philosophy of reform through solitude and reflection, and has held the likes of Al Capone and Willie Sutton. Covering around 11 acres in Philadelphia, it is now a historic site. From the moment he arrived until the moment he left, the prisoner would see no one. The furniture of the 8x12 cell consisted of a mattress and a Bible. "...Silence, solitude, the bible, never a moment of human contact, never a voice heard at a distance, the dead world of a living tomb..." In the end, the solitary confinement of Eastern State ended up driving most of its inmates insane, until 1903, when the idea of complete isolation was abandoned. By the time Eastern State was closed in 1971, it had become just another old, crowded prison with the usual share of brutality, riots, hunger strikes, escapes, suicides, and scandals. I think a regular guided tour and commentary would be scarier than the haunted house was...
Posted by Mark on October 27, 2000 at 10:34 AM .: link :.


End of This Day's Posts

Friday, October 06, 2000

Light the Lamp
Lets go Flyers! Hockey season doth rock, and the Flyers won their season opener 6-3. Young Justin Williams looked mighty impressive, but rookies have a way of starting strong and dropping off fast. The Flyers themselves looked ok, but they were still making a bunch of stupid mistakes that could have cost them the game. I predict that they will get clobbered by Boston on Saturday (by a score of 5-1).
Posted by Mark on October 06, 2000 at 09:14 AM .: link :.


End of This Day's Posts

Tuesday, September 19, 2000

Bert is Evil!
Hold on to your crackpipes, kiddies, it's time for a piece of classic web trash: Bert is Evil! One of the funniest things I have ever seen on the web.
Posted by Mark on September 19, 2000 at 01:37 PM .: link :.


End of This Day's Posts
