Sunday, March 31, 2013
TV Shows I Should Probably Catch Up With
As 2013 progresses, I realize that I'm watching much less in the way of movies lately, and catching up with more television series. In terms of "appointment television", I still don't watch much, but I do like to catch up with older seasons of good stuff, and streaming services like Netflix are a big enabler of that. So what are some things I should probably catch up with?
Wednesday, February 27, 2013
Recent and Future Podcastery
I have a regular stable of podcasts that generally keep me happy on a weekly basis, but as much as I love all of them, I will sometimes greedily consume them all too quickly, leaving me with nothing. Plus, it's always good to look out for new and interesting stuff. Quite frankly, I've not done a particularly good job keeping up with the general podcasting scene, so here's a few things I caught up with recently (or am planning to listen to in the near future):
Sunday, December 23, 2012
Holiday Link Dump
Things are getting festive around here, so here's a few quick links for your holiday enjoyment:
Wednesday, September 12, 2012
Podcasts are weird. I often find myself buried under hours of great podcastery, barely able to keep up. But then every once in a while, like this past weekend, I abruptly run out of things to listen to. Oh sure, there are plenty of backup or middling podcasts that I can fall back on, but I like to have stuff to look forward to, too. Here are some recent podcasts that I've checked out, some great, some I'm not so sure about.
Sunday, July 15, 2012
What is good?
Ian Sales thinks he knows:
I've lost count of the number of times I've been told "good is subjective" or "best is subjective". Every time I hear it, it makes me howl with rage. Because it is wrong.

The irony here is that I've lost count of the number of times I've been told that "good is objective". And yet, no one seems to be able to define what constitutes good. Even Ian, despite his adamant stance, describes what is good in entirely subjective terms.
It is not an exact science, and it is subject to changes in taste and/or re-evaluation in light of changes in attitudes and sensibilities. But there are certain key indicators in fiction which can be used to determine the quality of that piece of fiction.

Having established that there are key indicators that can be used to determine quality, Sales proceeds to list... approximately none of them. Instead, he talks about "taste" and "changes in attitudes and sensibilities" (both of which are highly subjective). If it's not an "exact science", how is it objective? Isn't this an implicit admission that subjectivity plays a role? He does mention some criteria for bad writing though:
Perhaps it's easier to describe what is bad - if good is subjective, then by definition bad must be too. Except, strangely, everyone seems to agree that the following do indeed indicate that a piece of fiction is bad: cardboard cutout characters, idiot plotting, clumsy prose, tin-eared dialogue, lack of rigour, graceless info-dumping, unoriginality, bad research...

The problem with this is that most of his indicators are subjective. Some of them could contain a nugget of objectivity, notably the "bad research" piece, but others are wholly subjective. What exactly constitutes "tin-eared dialogue"? One person's cardboard cutout character is another person's fully realized and empathetic soul.
Perhaps it's my engineering background taking over, but I have a pretty high standard for objectivity. There are many objective measures of a book, but most of those aren't very useful in determining the book's quality. For instance, I can count the number of letters or words in the book. I can track the usage of punctuation or contractions. Those numbers really won't tell me much, though. I can look at word distribution and vocabulary, but then, there are a lot of classics that don't use flowery language. Simplicity sometimes trumps complexity. I can evaluate the grammar using the standards of our language, but by those measures, James Joyce and Thomas Pynchon would probably be labeled "bad" writers. For that matter, so would Ian, whose recent novella Adrift on the Sea of Rains eschews the basic grammatical convention of using quotation marks for dialogue. But they're not bad writers, in large part because they stray from the standards. Context is important. So that's not really that useful either.
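For what it's worth, the kinds of crude objective measures I listed above are trivial to compute, which is part of why they're so unsatisfying. A minimal sketch (the function name and sample sentence are my own, purely for illustration):

```python
import string

def objective_metrics(text):
    """Purely objective (and mostly useless) measures of a text."""
    words = text.split()
    # Strip surrounding punctuation and lowercase to approximate vocabulary.
    normalized = [w.strip(string.punctuation).lower() for w in words]
    return {
        "words": len(words),
        "letters": sum(c.isalpha() for c in text),
        "punctuation": sum(c in string.punctuation for c in text),
        "vocabulary": len({w for w in normalized if w}),
    }

print(objective_metrics("The cat sat. The cat ran."))
# → {'words': 6, 'letters': 18, 'punctuation': 2, 'vocabulary': 4}
```

None of those numbers tell you whether the sentence is any good, which is exactly the point.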
The point of objectivity is to remove personal biases and feelings from the equation. If you can objectively measure a book, then I should be able to do the same - and our results should be identical. If we count the words in a book, we will get the same answer (assuming we count correctly). Similarly, if we're able to objectively measure a book's quality, you and I should come to the same conclusion. Now, Ian Sales has read more books than me. The guy's a writer, and he knows his craft well, so perhaps the two of us won't see eye to eye on a lot of things. But even getting two equivalently experienced people to agree on everything is a fool's errand. Critical reading is important. Not everyone that subverts grammatical conventions is doing so well or for good reason. Sometimes simplicity can be elegant, sometimes it feels clumsy. Works of art need to be put into the cultural and historical context, and thus a work should stand up to some sort of critical examination. But critical is not equivalent to objective.
Now, Ian does have an interesting point here. If what's "good" is subjective, then how is that a valuable statement?
If good is subjective, then awards are completely pointless. And studying literature, well, that's a complete waste of time too. After all, how can you be an expert in a topic in which one individual's value judgment is worth exactly the same another person's? There'd be no such thing as an expert. All books would have exactly the same artistic value.

Carried to its logical extreme, the notion that what's "good" is wholly subjective does complicate matters. I don't think I'd go quite as far as Ian did in the above referenced paragraph, but maybe he's on to something.
So far, I've mentioned a bunch of questions that Ian asked; here's my attempt at an answer:
We can devise whatever measurements we want, we can come up with statistical sampling models that will take into account sales and votes and prizes and awards and academic praise and journal mentions, whatever. I actually find those to be interesting and fun exercises, but they're just that. They ultimately aren't that important to history. I'd bet that the things from our era that are commonly referenced 200 years from now would seem horribly idiosyncratic and disjointed to us...
Sales concludes with this:
If you want to describe a book in entirely subjective terms, then tell people how much you enjoyed it, how much you liked it. That's your own personal reaction to it. It appealed to you, it entertained you. That's the book directly affecting you. Another person may or may not react the same way, the book might or might not do the same to them.

He's not wrong about that. Enjoyment is subjective. But if we divorce the concept of "good" from the concept of "enjoyment", what are we left with? It's certainly a useful distinction to make at times. There are many things I "like" that I don't think are particularly "good" on any technical level. I'm not saying that a book has to be "enjoyable" to be "good", but I don't think they're entirely independent either. There are many ways to measure a book. For the most part, in my opinion, the objective ones aren't very useful or predictive by themselves. You could have an amazingly well written book (from a prose standpoint) put into service of a poorly plotted story, and then what? On the other hand, complete subjectivity isn't exactly useful either. You fall into the trap that Ian lays out: if everything is entirely subjective, then there is no value in any of it. That's why we have all these elaborate systems though. We have markets that lead to sales numbers, we have awards (with large or small juries, working together or sometimes independently), we have academics, we have critics, we have blogs, we have reviews, we have friends whose opinions we trust, we have a lot of things we can consider.
In chaos theory, even simple, orderly systems display chaotic elements. Similarly, even the most chaotic natural systems have some sort of order to them. This is, of course, a drastic simplification. One could argue that the universe is headed towards a state of absolute entropy; the heat death of the universe. Regardless of the merits of this metaphor, I feel like the push and pull between objectivity and subjectivity is similar. Objective assessments of novels that are useful will contain some element of subjectivity. Similarly, most subjective assessments will take into account objective measurements. In the end, we do our best with what we've got. That's my opinion, anyway.
Wednesday, June 27, 2012
A few years ago, Malcolm Gladwell wrote an article called How David Beats Goliath, and the internets rose up in nerdy fury. Like a lot of Gladwell's work, the article is filled with anecdotes (whatever you may think of Gladwell, he's a master of anecdotes), most of which surround the notion of a full-court press in basketball. I should note at this point that I absolutely loathe the sport of basketball, so I don't really know enough about the mechanics of the game to comment on the merits of this strategy. That being said, the general complaint about the article is that Gladwell chose two examples that aren't really representative of the full-court press. The primary example seems to be a 12-year-old girls' basketball team, coached by an immigrant unfamiliar with the game:
Ranadive was puzzled by the way Americans played basketball. He is from Mumbai. He grew up with cricket and soccer. He would never forget the first time he saw a basketball game. He thought it was mindless. Team A would score and then immediately retreat to its own end of the court. Team B would inbound the ball and dribble it into Team A's end, where Team A was patiently waiting. Then the process would reverse itself. A basketball court was ninety-four feet long. But most of the time a team defended only about twenty-four feet of that, conceding the other seventy feet. Occasionally, teams would play a full-court press—that is, they would contest their opponent's attempt to advance the ball up the court. But they would do it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadive thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent's end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?

The strategy apparently worked well, to the point where they made it to the national championship tournament:
The opposing coaches began to get angry. There was a sense that Redwood City wasn't playing fair - that it wasn't right to use the full-court press against twelve-year-old girls, who were just beginning to grasp the rudiments of the game. The point of basketball, the dissenting chorus said, was to learn basketball skills. Of course, you could as easily argue that in playing the press a twelve-year-old girl learned something much more valuable - that effort can trump ability and that conventions are made to be challenged.

Most of the criticism of this missed the forest for the trees. A lot of people nitpicked some specifics, or argued as if Gladwell was advocating for all teams playing a press (when he was really just illustrating a broader point that underdogs don't always need to play by the stronger teams' conventions). One of the most common complaints was that "the press isn't always an advantage" which I'm sure is true, but again, it kinda misses the point that Gladwell was trying to make. Tellingly, most folks didn't argue about Gladwell's wargame anecdote, though you could probably make similar nitpicky arguments.
Anyway, the reason I'm bringing this up three years after the fact is not to completely validate Gladwell's article or hate on his critics. As I've already mentioned, I don't care a whit about basketball, but I do think Gladwell has a more general point that's worth exploring. Oddly enough, after recently finishing the novel Redshirts, I got an itch to revisit some Star Trek: The Next Generation episodes and rediscovered one of my favorite episodes. Oh sure, it's not one of the celebrated episodes that make top 10 lists or anything, but I like it nonetheless. It's called Peak Performance, and it's got quite a few parallels to Gladwell's article.
The main plot of the episode has to do with a war simulation exercise in which the Enterprise engages in a mock battle with an inferior ship (with a skeleton crew led by Commander Riker). There's an obvious parallel here between the episode and Gladwell's article (when asked how a hopelessly undermatched ship can compete with the Enterprise, Worf responds "Guile."), but it's the B plot of the episode that is even more relevant (the main plot goes in a bit of a different direction due to some meddling Ferengi).
The B plot concerns a military strategist named Kolrami. He's acting as an observer of the exercise and he's arrogant, smarmy, and condescending. He's also a master at Strategema, one of Star Trek's many fictional (and nonsensical) games. Riker challenges this guy to a match because he's a glutton for punishment (this really is totally consistent with his character) - he just wants to say that he played the master, even if he lost... which, of course, he does. Later, Dr. Pulaski volunteers Data to play a game, with the thought being that the android would easily dispatch Kolrami, thus knocking him down a peg. But even Data loses.
Data is shaken by the loss. He even removes himself from duty. He expected to do better. According to the rules, he "made no mistakes", and yet he still lost. After analyzing his failure and discussing the matter with the captain (who basically tells Data to shut up and get back to work), Data resumes his duty, eventually even challenging Kolrami to a rematch. But this time, Data alters his premise for playing the game. "Working under the assumption that Kolrami was attempting to win, it is reasonable to assume that he expected me to play for the same goal." But Data wasn't playing to win. He was playing for a stalemate. Whenever opportunities for advancement appeared, Data held back, attempting to maintain a balance. He estimated that he should be able to keep the game going indefinitely. Frustrated by Data's stalling, Kolrami forfeits in a huff.
There's an interesting parallel here. Many people took Gladwell's article to mean that he thought the press was a strategy that should be employed by all teams, but that's not really the point. The examples he gave were situations in which the press made sense. Similarly, Data's strategy of playing for stalemate was uniquely suited to him. The reason he managed to win was that he is an android without any feelings. He doesn't get frustrated or bored, and his patience is infinite. So while Kolrami may have technically been a better player, he was no match for Data once Data played to his own strengths.
Obviously, quoting fiction does nothing to bolster Gladwell's argument, but I was struck by the parallels. One of the complaints about Gladwell's article that rang at least a little true was that the article's overarching point was "so broad and obvious as to be not worth writing about at all." I don't know that I fully buy that, as a lot of great writing can ultimately be boiled down to something "broad and obvious", but it's a fair point. On the other hand, even if you think that, I do find that there's value in highlighting examples of how it's done, whether it's a 12-year-old girls' basketball team, or a fictional android playing a nonsensical (but metaphorically apt) game on a TV show. It seems that human beings sometimes need to be reminded that thinking outside the box is an option.
Wednesday, May 02, 2012
Tweets of Glory
One of the frustrating things about Twitter is that it's impossible to find something once it's gone past a few days. I've gotten into the habit of favoriting ones I find particularly funny or that I need to come back to, which is nice, as it allows me to publish a cheap Wednesday blog entry (incidentally, sorry for the cheapness of this entry) that will hopefully still be fun for folks to read. So here are some tweets of glory:
Note: This was Stephenson's first tweet in a year and a half.
This one is obviously a variation on a million similar tweets (and, admit it, it's a thought we've all had), but it was the first one I saw (or at least, the first I favorited - I'm sure it's far from the first time someone made that observation).
Well, that happened. Stay tuned for some (hopefully) more fulfilling content on Sunday...
Wednesday, April 25, 2012
I like podcasts and listen to many different ones, but it seems that the ones that I actually look forward to are few and far between. Here are a few recent additions to the rotation:
Wednesday, April 18, 2012
I'm gonna be taking a trip to The Cabin in The Woods tonight, so time is sparse, thus some linkys for you:
Sunday, April 15, 2012
When the whole Kickstarter thing started, I went through a number of phases. First, it's a neat idea and it leverages some of the stuff that makes the internet great. Second, as my systems analyst brain started chewing on it, I had some reservations... but that was short-lived as, third, some really interesting stuff started getting funded. Here are some of the ones I'm looking forward to:
Wednesday, August 17, 2011
More on Spoilers
I recently wrote about the unintended consequences of spoiler culture, and I just came across this post which has been making waves around the internets. That post points to a study which concluded that readers actually like to have a story "spoiled" before they start reading.
The U.C. San Diego researchers, who compiled this chart showcasing the spoiler ratings of three genres (ironic twist stories, mysteries or literary stories), posited this about their findings: "once you know how it turns out, it’s cognitively easier - you’re more comfortable processing the information - and can focus on a deeper understanding of the story."

Jonah Lehrer apparently goes so far as to read the last 5 pages of the novels he reads, just so he has an idea where the story's headed. He clearly approves of the research's conclusions, and makes a few interesting observations, including:
Surprises are much more fun to plan than experience. The human mind is a prediction machine, which means that it registers most surprises as a cognitive failure, a mental mistake. Our first reaction is almost never “How cool! I never saw that coming!” Instead, we feel embarrassed by our gullibility, the dismay of a prediction error. While authors and screenwriters might enjoy composing those clever twists, they should know that the audience will enjoy it far less.

Interestingly, a few years ago, I posted about this conundrum from the opposite end. Author China Miéville basically thinks it's extremely difficult, maybe even impossible, to write a crime story or mystery with a good ending:
Reviews of crime novels repeatedly refer to this or that book’s slightly disappointing conclusion. This is the case even where reviewers are otherwise hugely admiring. Sometimes you can almost sense their bewilderment when, looking closely at the way threads are wrapped up and plots and sub-plots knotted, they acknowledge that nothing could be done to improve an ending, that it works, that it is ‘fair’ (a very important quality for the crime aficionado - no last-minute suspects, no evidence the reader hasn’t seen), that it is well-written, that it surprises… and yet that it disappoints.

There's a lot to parse out above, but I have two thoughts on the conclusions raised by the original study. First is that there may actually be something to the cognitive benefits theory of why people like this. The theory and methodology of interpretation of text is referred to as hermeneutics*. This is a useful field because language, especially figurative language, is often obscure and vague. For example, in the study of religious writings, it is often found that they are written in a certain vernacular and for a specific audience. In order to truly understand said writings, it is important to put them in their proper cultural and historical context. You can't really do that without knowing what the text says in the first place.
This is what's known as the hermeneutic circle. It's kinda like the application of science to interpretation. Scientists start by identifying a problem and hypothesizing an answer to it. In performing and observing an experiment to test that hypothesis, they gain new insights which must then be used to revise the hypothesis. This is basically a hermeneutic circle. To apply it to the situation at hand: When reading a book, we are influenced by our overall view of the book's themes. But how are we to know the book's themes as a whole if we have not yet finished reading the parts of the book? We need to start reading the book with our own "pre-understanding", from which we hypothesize a main theme for the whole book. After we finish reading the book, we go back to each individual chapter with this main theme in mind to get a better understanding of how all the parts relate to the whole. During this process, we often end up changing our main theme; with the new information gained from each revision, we can refine the theme again, and so on, until we can see a coherent and consistent picture of the whole book. What we get out of this hermeneutic circle is not absolute and final, but it is considered to be reasonable because it has withstood the process of critical testing.
This process in itself can be fulfilling, and it's probably why folks like Jonah Lehrer don't mind spoilers - it gives them a jump start on the hermeneutic circle.
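If it helps to see the shape of that loop, the hermeneutic circle works a lot like iterating to a fixed point: keep rereading the parts against the working theme until a full pass no longer changes it. A toy sketch, where the "theme" is just an accumulating set of motifs (the function and data here are my own illustrative stand-ins, not anything from the hermeneutics literature):

```python
def hermeneutic_circle(chapters, revise, initial_theme, max_passes=10):
    """Revise a working theme against each part until it stabilizes."""
    theme = initial_theme
    for _ in range(max_passes):
        new_theme = theme
        for chapter in chapters:
            # Each part is reinterpreted in light of the current whole...
            new_theme = revise(new_theme, chapter)
        if new_theme == theme:
            return theme  # ...until a full pass changes nothing.
        theme = new_theme
    return theme

# Toy example: motifs noticed in three "chapters" accumulate into a theme.
chapters = [{"sea", "obsession"}, {"obsession", "fate"}, {"sea", "fate"}]
merge = lambda theme, chapter: theme | chapter
print(hermeneutic_circle(chapters, merge, frozenset()))
```

The interesting part is the stopping condition: you are done not when you run out of chapters, but when another pass through them leaves your understanding unchanged.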
Second, the really weird thing about this study is that it sorta misses the point. As Freddie points out:
The whole point of spoilers is that they're unchosen; nobody really thinks that there's something wrong with people accessing secrets and endings about art they haven't yet consumed. What they object to is when spoilers are presented in a way that an unsuspecting person might unwittingly read them. The study suggests that people have a preference for knowing the ending, but preference involves choice. You can't deliberately act on a preference for foreknowledge of plot if you are presented the information without choosing to access it.

And that's really the point. Sometimes I don't mind knowing the twist before I start watching/reading something, but there are other times when I want to go in completely blind. Nothing says that I have to approach all movies or books (or whatever) exactly the same way, every time. And context does matter. When you see a movie without knowing anything about it, there can be something exhilarating in the discovery. That doesn't mean I have to approach all movies that way, just that variety is sometimes a good thing.
* - Yeah, I plundered that entry that I wrote for everything2 all those years ago pretty heavily. Sue me.
Wednesday, July 27, 2011
I like podcasts, but it's depressingly hard to find ones that I really enjoy and which are still regularly published. I tend to discover a lot of podcasts just as they're going through their death throes. This is sometimes ok, as I'm still able to make my way through their archives, but then I run out of content and have to start searching for a new podcast. I will often try out new podcasts, but I have only added a few to the rotation of late. Here's some recent stuff I've been listening to:
Sunday, July 10, 2011
Flow and Games
When I read a book, especially a non-fiction book, I usually find myself dog-earing pages with passages I find particularly interesting or illuminating. To some book lovers, I'm sure this practice seems barbaric and disrespectful, but it's never really bothered me. Indeed, the best books are the ones with the most dog-ears. Sometimes there are so many dog-ears that the width of the book is distorted so that the top of the book (which is where the majority of my dog-ears go) is thicker than the bottom. The book Flow, by Mihaly Csikszentmihalyi1 is one such book.
I've touched on this concept before, in posts about Interrupts and Context Switching and Communication. This post isn't a direct continuation of that series, but it is related. My conception of flow in those posts is technically accurate, but also imprecise. My concern was mostly focused around how fragile the state of flow can be - something that Csikszentmihalyi doesn't spend much time on in the book. My description basically amounted to a state of intense concentration. Again, while technically accurate, there's more to it than that, and Csikszentmihalyi equates the state with happiness and enjoyment (from page 2 of my edition):
... happiness is not something that happens. It is not the result of good fortune or random chance. It is not something that money can buy or power command. It does not depend on outside events, but, rather, on how we interpret them. Happiness, in fact, is a condition that must be prepared for, cultivated, and defended privately by each person. People who learn to control inner experience will be able to determine the quality of their lives, which is as close as any of us can come to being happy.

In essence, the world is a chaotic place, but there are times when we actually feel like we have achieved some modicum of control. When we become masters of our own fate. It's an exhilarating feeling that Csikszentmihalyi calls "optimal experience". It can happen at any time, whether external forces are favorable or not. It's an internal condition of the mind. One of the most interesting things about this condition is that it doesn't feel like happiness when it's happening (page 3):
Contrary to what we usually believe, moments like these, the best moments of our lives, are not the passive, receptive, relaxing times - although such experiences can also be enjoyable, if we have worked hard to attain them. The best moments usually occur when a person's body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile. Optimal experience is thus something that we make happen. For a child, it could be placing with trembling fingers the last block on a tower she has built, higher than any she has built so far; for a swimmer, it could be trying to beat his own record; for a violinist, mastering an intricate musical passage. For each person there are thousands of opportunities, challenges to expand ourselves.

This is an interesting observation. The best times of our lives are often hectic, busy, and frustrating while they're happening, and yet the feeling of satisfaction we get after-the-fact seems worth the effort. Interestingly, since Flow is a state of mind, experiences that are normally passive can become a flow activity through taking a more active role. Csikszentmihalyi makes an interesting distinction between "pleasure" and "enjoyment" (page 46):
Experiences that give pleasure can also give enjoyment, but the two sensations are quite different. For instance, everyone takes pleasure in eating. To enjoy food, however, is more difficult. A gourmet enjoys eating, as does anyone who pays enough attention to a meal so as to discriminate the various sensations provided by it. As this example suggests, we can experience pleasure without any investment of psychic energy, whereas enjoyment happens only as a result of unusual investments of attention. A person can feel pleasure without any effort, if the appropriate centers in his brain are electrically stimulated, or as a result of the chemical stimulation of drugs. But it is impossible to enjoy a tennis game, a book, or a conversation unless attention is fully concentrated on the activity.

As someone who watches a lot of movies and reads a lot of books, I can definitely see what Csikszentmihalyi is saying here. Reading a good book will not always be a passive activity, but a dialogue2. Rarely do I accept what someone has written unconditionally or without reserve. For instance, in the passage above, I remember thinking about how arbitrary Csikszentmihalyi's choice of terms was - would the above passage be any different if we switched "pleasure" and "enjoyment"? Ultimately, that doesn't really matter. Csikszentmihalyi's point is that there's a distinction between hedonistic, passive experiences and complex, active experiences.
There is, of course, a limit to what we can experience. In a passage that is much more concise than my post on Interrupts and Context Switching, Csikszentmihalyi expands on this concept:
Unfortunately, the nervous system has definite limits on how much information it can process at any given time. There are just so many "events" that can appear in consciousness and be recognized and handled appropriately before they begin to crowd each other out. Walking across a room while chewing bubble gum at the same time is not too difficult, even though some statesmen have been alleged to be unable to do it; but, in fact, there is not that much more that can be done concurrently. Thoughts have to follow each other, or they get jumbled. While we are thinking about a problem we cannot truly experience either happiness or sadness. We cannot run, sing, and balance the checkbook simultaneously, because each one of those activities exhausts most of our capacity for attention.

In other words, human beings are kinda like computers in that we execute instructions in a serial fashion, and things like context switches are quite disruptive to the concept of optimal experience3.
Given all of the above, it's easy to see why there isn't really an easy answer about how to cultivate flow. Csikszentmihalyi is a psychologist and is thus quite careful about how he phrases these things. His research is extensive, but necessarily imprecise. Nevertheless, he has identified eight overlapping "elements of enjoyment" that are usually present during flow. Through his extensive interviews, he has noticed at least a few of these major components come up whenever someone discusses a flow activity. A quick summary of the components (pages 48-67):
First, to a large extent, I think this helps explain why video games are so popular. Indeed, many of the flow activities in the book are games or sports. Chess, swimming, dancing, etc... He doesn't mention video games specifically, but they seem to fit the mold. Skills are certainly involved in video games. They require concentration and thus often lead to a loss of self-consciousness and lack of awareness of the outside world. They cause you to lose track of time. They permit a palpable sense of control over their digital environment (indeed, the necessity of a limited paradigm of reality is essential to video games, which lends the impression of control and agency to the player). And perhaps most importantly, the goals are usually very clear and the feedback is nearly instantaneous. It's not uncommon for people to refer to video games in terms of addiction, which brings up an interesting point about flow (page 70):
The flow experience, like everything else, is not "good" in an absolute sense. It is good only in that it has the potential to make life more rich, intense, and meaningful; it is good because it increases the strength and complexity of the self. But whether the consequences of any particular instance of flow is good in a larger sense needs to be discussed and evaluated in terms of more inclusive social criteria. The same is true, however, of all human activities, whether science, religion, or politics.

Flow is value neutral. In the infamous words of Buckethead, "Like the atom, the flyswatter can be a force for great good or great evil." So while video games could certainly be a flow activity, are they a good activity? That is usually where the controversy stems from. I believe the flow achieved during video game playing to be valuable, but I can also see why some wouldn't feel that way. Since flow is an internal state of the mind, it's difficult to observe just how that condition is impacting a given person.
Another implication that kept occurring to me throughout the book is what's being called "The gamification of everything". The idea is to use the techniques of game design to get people interested in what are normally non-game activities. This concept is gaining traction all over the place, but especially in business. For example, Target encouraged their cashiers to speed up checkout of customers by instituting a system of scoring and leaderboards to give cashiers instant feedback. In the book, Csikszentmihalyi recounts several examples of employees in seemingly boring jobs, such as assembly lines, who have turned their job from a tedious bore to a flow activity thanks to measurement and feedback. There are a lot of internet startups that use techniques from gaming to enhance their services. Many use an awards system with points and leaderboards. Take FourSquare, with its badges and "Mayorships", which turns "going out" (to restaurants, bars, and other commercial establishments) into a game. Daily Burn uses game mechanics to help people lose weight. Mint.com is a service that basically turns personal finance into a game. The potential examples are almost infinite4.
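The core mechanic behind all of these services is the same: score an action, update a running total, and rank players so feedback is instant. Here's a minimal sketch of that mechanic (the class and names are hypothetical, purely for illustration, and obviously not any company's actual system):

```python
# A toy scoring-and-leaderboard mechanic: instant feedback on each
# action, plus a ranking that turns routine activity into a game.

class Leaderboard:
    def __init__(self):
        self.scores = {}

    def record(self, player, points):
        """Update a player's running total and return it immediately
        (the 'instant feedback' part of the mechanic)."""
        self.scores[player] = self.scores.get(player, 0) + points
        return self.scores[player]

    def top(self, n=3):
        """Rank players by total score, highest first."""
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]

board = Leaderboard()
board.record("alice", 10)
board.record("bob", 7)
board.record("alice", 5)
print(board.top())  # [('alice', 15), ('bob', 7)]
```

The interesting design point is how little machinery is needed: a counter and a sort are enough to give an activity the clear goals and immediate feedback that Csikszentmihalyi identifies as components of flow.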
Again, none of this is necessarily a "good" thing. If Target employees are gamed into checking out faster, are they sacrificing accuracy in the name of speed? What is actually gained by being the "mayor" of a bar in Foursquare? Indeed, many marketing schemes that revolve around the gamification of everything are essentially ways to "trick" customers or "exploit" psychology for profit. I don't really have a problem with this, but I do think it's an interesting trend, and its basis is the flow created by playing games.
On a more personal note, one thing I can't help but notice is that my latest hobby of homebrewing beer seems, at first glance, to be a poor flow activity. Or, at least, the feedback part of the process is not very good. When you brew a beer, you have to wait a few weeks after brew day to bottle or keg your beer, then you have to wait some time after that (less if you keg) before you can actually taste the beer to see how it came out (sure, you can drink the unfermented wort or the uncarbonated/unconditioned beer after primary fermentation, but that's not an exact measurement, and even then, you have to wait long periods of time). On the other hand, flow is an internal state of mind. The process of brewing the beer in the first place has many places for concentration and smaller bits of feedback. When I thought about it more, I feel like those three hours are, in themselves, something of a flow activity. The fact that I get to try it a few weeks/months later to see how it turned out is just an added bonus. Incidentally, the saison I brewed a few weeks ago? It seems to have turned out well - I think it's my best batch yet.
In case you can't tell, I really enjoyed this book, and as longwinded as this post turned out, there's a ton of great material in the book that I'm only touching on. I'll leave you with a quote that seems to sum things up pretty well (page 213): "Being in control of the mind means that literally anything that happens can be a source of joy."
1 - I guess it's a good thing that I'm writing this as opposed to speaking about it, as I have no idea how to pronounce any part of Mihaly Csikszentmihalyi's name.
2 - Which is not to take away the power of books or movies where you sit down, turn your brain off, and veg out for a while. Hey, I think True Blood is coming on soon...
3 - This is, of course, a massive simplification of a subject that we don't even really understand that well. My post on Interrupts and Context Switching goes into more detail, but even that is lacking in a truly detailed understanding of the conscious mind.
4 - I have to wonder how familiar Casinos are with these concepts. I'm not talking about the games of chance themselves, though that is also a good example of a flow activity (and you can see why gambling addiction could be a problem as a result). Take, for example, blackjack. The faster the dealer gets through a hand of blackjack, the higher the throughput of the table, and thus the more money a Casino would make. Casinos are all about probability, and the higher the throughput, the bigger their take. I seriously wonder if blackjack dealers are measured in some way (in terms of timing, not money).
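The throughput argument in that footnote is simple arithmetic: with a fixed house edge, the hourly take scales linearly with hands dealt per hour. A quick sketch with made-up illustrative numbers:

```python
# Why dealer speed matters to a casino: expected hourly take is just
# hands/hour * average bet * house edge, so a faster dealer directly
# increases the take. All numbers here are illustrative assumptions.

def hourly_take(hands_per_hour, avg_bet, house_edge):
    return hands_per_hour * avg_bet * house_edge

slow = hourly_take(60, 25, 0.005)   # slower dealer: 60 hands/hour
fast = hourly_take(80, 25, 0.005)   # faster dealer: 80 hands/hour
print(slow, fast)  # 7.5 10.0 -- a third more take from speed alone
```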
Posted by Mark on July 10, 2011 at 07:44 PM .: link :.
Wednesday, May 25, 2011
How Boyd Wrote
I'm currently reading a biography of John Boyd, and in light of Sunday's post, I found a recent chapter particularly interesting. Boyd was a Fighter Pilot in the Air Force. He flew in Korea, made a real name for himself at Fighter Weapons School (which was later copied by the Navy - you may have heard of their version: Top Gun), and spent the latter part of his career working on groundbreaking strategic theories. He was an instructor at FWS for several years, and before leaving, he made his first big contributions to the Air Force. He wrote a tactics manual called Aerial Attack Study. Despite the passage of Vietnam and the Gulf War, nothing substantial has been added to it. It's served as the official tactics manual all over the world for over 40 years (actually, more like 50 at this point).
And Boyd almost didn't write it. Robert Coram (the author of the aforementioned biography) summarizes the unconventional manner in which the manual was written (on page 104 of my edition):
Boyd could not write the manual and continue flying and teaching; there simply wasn't enough time. Plus, the idea of sitting down at a desk and spending hundreds of hours writing a long document brought him to the edge of panic. He was a talker, not a writer. When he talked his ideas tumbled back and forth and he fed off the class and distilled his thoughts to the essence. But writing meant precision. And once on paper, the ideas could not be changed. ...

It's a subject I didn't really cover much in my last post: the method of communication can impact the actual message. The way we communicate changes the way we think. Would Boyd's work have been as great if he didn't dictate it? Maybe, but it probably wouldn't have been the same.
Incidentally, I don't normally go in for biographies, but this is an excellent book so far. Part of that may be that Boyd is a genuinely interesting guy and that he was working on stuff that interests me, but I'm still quite enjoying myself.
Sunday, May 22, 2011
About two years ago (has it really been that long!?), I wrote a post about Interrupts and Context Switching. As long and ponderous as that post was, it was actually meant to be part of a larger series of posts. This post is meant to be the continuation of that original post and hopefully, I'll be able to get through the rest of the series in relatively short order (instead of dithering for another couple years). While I'm busy providing context, I should also note that this series was also planned for my internal work blog, but in the spirit of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Obviously, some of the specifics of my workplace have been removed from what follows, but it should still contain enough general value to be worthwhile.
In the previous post, I wrote about how computers and humans process information and in particular, how they handle switching between multiple different tasks. It turns out that computers are much better at switching tasks than humans are (for reasons belabored in that post). When humans want to do something that requires a lot of concentration and attention, such as computer programming or complex writing, they tend to work best when they have large amounts of uninterrupted time and can work in an environment that is quiet and free of distractions. Unfortunately, such environments can be difficult to find. As such, I thought it might be worth examining the source of most interruptions and distractions: communication.
Of course, this is a massive subject that can't even be summarized in something as trivial as a blog post (even one as long and bloviated as this one is turning out to be). That being said, it's worth examining in more detail because most interruptions we face are either directly or indirectly attributable to communication. In short, communication forces us to do context switching, which, as we've already established, is bad for getting things done.
Let's say that you're working on something large and complex. You've managed to get started and have reached a mental state that psychologists refer to as flow (also colloquially known as being "in the zone"). Flow is basically a condition of deep concentration and immersion. When you're in this state, you feel energized and often don't even recognize the passage of time. Seemingly difficult tasks no longer feel like they require much effort and the work just kinda... flows. Then someone stops by your desk to ask you an unrelated question. As a nice person and an accommodating coworker, you stop what you're doing, listen to the question and hopefully provide a helpful answer. This isn't necessarily a bad thing (we all enjoy helping other people out from time to time) but it also represents a series of context switches that would most likely break you out of your flow.
Not all work requires you to reach a state of flow in order to be productive, but for anyone involved in complex tasks like engineering, computer programming, design, or in-depth writing, flow is a necessity. Unfortunately, flow is somewhat fragile. It doesn't happen instantaneously; it requires a transition period where you refamiliarize yourself with the task at hand and the myriad issues and variables you need to consider. When your colleague departs and you can turn your attention back to the task at hand, you'll need to spend some time getting your brain back up to speed.
In isolation, the kind of interruption described above might still be alright every now and again, but imagine if the above scenario happened a couple dozen times in a day. If you're supposed to be working on something complicated, such a series of distractions would be disastrous. Unfortunately, I work for a 24/7 retail company, and the nature of our business sometimes requires frequent interruptions, so there are times when I'm in a near constant state of context switching. None of this is to say I'm not part of the problem. I am certainly guilty of interrupting others, sometimes frequently, when I need some urgent information. This makes working on particularly complicated problems extremely difficult.
In the above example, there are only two people involved: you and the person asking you a question. However, in most workplace environments, that situation indirectly impacts the people around you as well. If they're immersed in their work, an unrelated conversation two cubes down may still break them out of their flow and slow their progress. This isn't nearly as bad as some workplaces that have a public address system - basically a way to interrupt hundreds or even thousands of people in order to reach one person - but it does still represent a challenge.
Now, the really insidious part about all this is that communication is really a good thing, a necessary thing. In a large scale organization, no one person can know everything, so communication is unavoidable. Meetings and phone calls can be indispensable sources of information and enablers of collaboration. The trick is to do this sort of thing in a way that interrupts as few people as possible. In some cases, this will be impossible. For example, urgency often forces disruptive communication (because you cannot afford to wait for an answer, you will need to be more intrusive). In other cases, there are ways to minimize the impact of frequent communication.
One way to minimize communication is to have frequently requested information documented in a common repository, so that if someone has a question, they can find it there instead of interrupting you (and potentially those around you). Naturally, this isn't quite as effective as we'd like, mostly because documenting information is a difficult and time consuming task in itself and one that often gets left out due to busy schedules and tight timelines. It turns out that documentation is hard! A while ago, Shamus wrote a terrific rant about technical documentation:
The stereotype is that technical people are bad at writing documentation. Technical people are supposedly inept at organizing information, bad at translating technical concepts into plain English, and useless at intuiting what the audience needs to know. There is a reason for this stereotype. It’s completely true.

I don't think it's quite as bad as Shamus points out, mostly because I think that most people suffer from the same issues as technical people. Technology tends to be complex and difficult to explain in the first place, so it's just more obvious there. Technology is also incredibly useful because it abstracts many difficult tasks, often through the use of metaphors. But when a user experiences the inevitable metaphor shear, they have to confront how the system really works, not the easy abstraction they've been using. This descent into technical details will almost always be a painful one, no matter how well documented something is, which is part of why documentation gets short shrift. I think the fact that there actually is documentation is usually a rather good sign. Then again, lots of things aren't documented at all.
There are numerous challenges for a documentation system. It takes resources, time, and motivation to write. It can become stale and inaccurate (sometimes this can happen very quickly) and thus it requires a good amount of maintenance (this can involve numerous other topics, such as version histories, automated alert systems, etc...). It has to be stored somewhere, and thus people have to know where and how to find it. And finally, the system for building, storing, maintaining, and using documentation has to be easy to learn and easy to use. This sounds all well and good, but in practice, it's a nonesuch beast. I don't want to get too carried away talking about documentation, so I'll leave it at that (if you're still interested, that nonesuch beast article is quite good). Ultimately, documentation is a good thing, but it's obviously not the only way to minimize communication strain.
I've previously mentioned that computer programming is one of those tasks that require a lot of concentration. As such, most programmers abhor interruptions. Interestingly, communication technology has been becoming more and more reliant on software. As such, it should be no surprise that a lot of new tools for communication are asynchronous, meaning that the exchange of information happens at each participant's own convenience. Email, for example, is asynchronous. You send an email to me. I choose when I want to review my messages and I also choose when I want to respond. Theoretically, email does not interrupt me (unless I use automated alerts for new email, such as the default Outlook behavior) and thus I can continue to work, uninterrupted.
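The essential property of asynchronous communication is that the sender never blocks or interrupts the receiver; messages queue up and the receiver drains them at its convenience, like checking email between work sessions. A minimal sketch of that pattern (the `Inbox` class is a hypothetical illustration):

```python
# A sketch of asynchronous messaging: sending enqueues and returns
# immediately; the receiver reads everything in one batch whenever
# it chooses, so incoming messages never force a context switch.

from collections import deque

class Inbox:
    def __init__(self):
        self.messages = deque()

    def send(self, msg):
        self.messages.append(msg)  # sender never blocks or interrupts

    def check(self):
        """Drain all pending messages at the receiver's convenience."""
        drained = list(self.messages)
        self.messages.clear()
        return drained

inbox = Inbox()
inbox.send("status update")
inbox.send("quick question")
# ...receiver keeps working, uninterrupted, then:
print(inbox.check())  # ['status update', 'quick question']
print(inbox.check())  # []
```

Compare this with a synchronous call, where the sender effectively executes `check()` on the receiver's behalf, right now, whether the receiver is in flow or not.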
The aforementioned documentation system is also a form of asynchronous communication and indeed, most of the internet itself could be considered a form of documentation. Even the communication tools used on the web are mostly asynchronous. Twitter, Facebook, YouTube, Flickr, blogs, message boards/forums, RSS and aggregators are all reliant on asynchronous communication. Mobile phones are obviously very popular, but I bet that SMS texting (which is asynchronous) is used just as much as voice, if not more so (at least, for younger people). The only major communication tools invented in the past few decades that aren't asynchronous are instant messaging and chat clients. And even those systems are often used in a more asynchronous way than traditional speech or conversation. (I suppose web conferencing is a relatively new communication tool, though it's really just an extension of conference calls.)
The benefit of asynchronous communication is, of course, that it doesn't (or at least it shouldn't) represent an interruption. If you're immersed in a particular task, you don't have to stop what you're doing to respond to an incoming communication request. You can deal with it at your own convenience. Furthermore, such correspondence (even in a supposedly short-lived medium like email) is usually stored for later reference. Such records are certainly valuable resources. Unfortunately, asynchronous communication has its own set of difficulties as well.
Miscommunication is certainly a danger in any case, but it seems more prominent in the world of asynchronous communication. Since there is no easy back-and-forth in such a medium, there is little room for clarification, and one is often left with only one's own interpretation. Miscommunication is doubly challenging because it creates an ongoing problem. What could have been a single conversation has now ballooned into several asynchronous touch-points and even the potential for wasted work.
One of my favorite quotations is from Anne Morrow Lindbergh:
To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!

It's difficult to beat the endless nuance of face-to-face communication, and for some discussions, nothing else will do. But as Lindbergh notes, communication is, in itself, a difficult proposition. Difficult, but necessary. About the best we can do is to attempt to minimize the misunderstanding.
I suppose one way to mitigate the possibility of miscommunication is to formalize the language in which the discussion is happening. This is easier said than done, as our friends in the legal department would no doubt say. Take a close look at a formal legal contract and you can clearly see the flaws in formal language. They are ostensibly written in English, but they require a lot of effort to compose or to read. Even then, opportunities for miscommunication or loopholes exist. Such a process makes sense when dealing with two separate organizations that each have their own agenda. But for internal collaboration purposes, such a formalization of communication would be disastrous.
You could consider computer languages a form of formal communication, but for most practical purposes, this would also fall short of a meaningful method of communication. At least, with other humans. The point of a computer language is to convert human thought into computational instructions that can be carried out in an almost mechanical fashion. While such a language is indeed very formal, it is also tedious, unintuitive, and difficult to compose and read. Our brains just don't work like that. Not to mention the fact that most of the communication efforts I'm talking about are the precursors to the writing of a computer program!
Despite all of this, a light formalization can be helpful and the fact that teams are required to produce important documentation practically requires a compromise between informal and formal methods of communication. In requirements specifications, for instance, I have found it quite beneficial to formally define various systems, acronyms, and other jargon that is referenced later in the document. This allows for a certain consistency within the document itself, and it also helps establish guidelines surrounding meaningful dialogue outside of the document. Of course, it wouldn't quite be up to legal standards and it would certainly lack the rigid syntax of computer languages, but it can still be helpful.
I am not an expert in linguistics, but it seems to me that spoken language is much richer and more complex than written language. Spoken language features numerous intricacies and tonal subtleties such as inflections and pauses. Indeed, spoken language often contains its own set of grammatical patterns which can be different than written language. Furthermore, face-to-face communication also consists of body language and other signs that can influence the meaning of what is said depending on the context in which it is spoken. This sort of nuance just isn't possible in written form.
This actually illustrates a wider problem. Again, I'm no linguist and haven't spent a ton of time examining the origins of language, but it seems to me that language emerged as a more immediate form of communication than what we use it for today. In other words, language was meant to be ephemeral, but with the advent of written language and improved technological means for recording communication (which are, historically, relatively recent developments), we're treating it differently. What was meant to be short-lived and transitory is now enduring and long-lived. As a result, we get things like the ever changing concept of political-correctness. Or, more relevant to this discussion, we get the aforementioned compromise between formal and informal language.
Another drawback to asynchronous communication is the propensity for over-communication. The CC field in an email can be a dangerous thing. It's very easy to broadcast your work out to many people, but the more this happens, the more difficult it becomes to keep track of all the incoming stimuli. Also, the language used in such a communication may be optimized for one type of reader, while the audience may be more general. This applies to other asynchronous methods as well. Documentation in a wiki is infamously difficult to categorize and find later. When you have an army of volunteers (as Wikipedia does), it's not as large a problem. But most organizations don't have such luxuries. Indeed, we're usually lucky if something is documented at all, let alone well organized and optimized.
The obvious question, which I've skipped over for most of this post (and, for that matter, the previous post), is: why communicate in the first place? If there are so many difficulties that arise out of communication, why not minimize such frivolities so that we can get something done?
Indeed, many of the greatest works in history were created by one mind. Sometimes, two. If I were to ask you to name the greatest inventor of all time, what would you say? Leonardo da Vinci or perhaps Thomas Edison. Both had workshops consisting of many helping hands, but their greatest ideas and conceptual integrity came from one man. Great works of literature? Shakespeare is the clear choice. Music? Bach, Mozart, Beethoven. Painting? da Vinci (again!), Rembrandt, Michelangelo. All individuals! There are collaborations as well, but usually only among two people. The Wright brothers, Gilbert and Sullivan, and so on.
So why has design and invention gone from solo efforts to group efforts? Why do we know the names of most of the inventors of 19th and early 20th century innovations, but not later achievements? For instance, who designed the Saturn V rocket? No one knows that, because it was a large team of people (and it was the culmination of numerous predecessors made by other teams of people). Why is that?
The biggest and most obvious answer is the increasing technological sophistication in nearly every area of engineering. The infamous Lazarus Long adage that "Specialization is for insects." notwithstanding, the amount of effort and specialization in various fields is astounding. Take a relatively obscure and narrow branch of mechanical engineering like Fluid Dynamics, and you'll find people devoting most of their life to the study of that field. Furthermore, the applications of that field go far beyond what we'd assume. Someone tinkering in their garage couldn't make the Saturn V alone. They'd require too much expertise in a wide and disparate array of fields.
This isn't to say that someone tinkering in their garage can't create something wonderful. Indeed, that's where the first personal computer came from! And we certainly know the names of many innovators today. Mark Zuckerberg and Larry Page/Sergey Brin immediately come to mind... but even their inventions spawned large companies with massive teams driving future innovation and optimization. It turns out that scaling a product up often takes more effort and more people than expected. (More information about the pros and cons of moving to a collaborative structure will have to wait for a separate post.)
And with more people comes more communication. It's a necessity. You cannot collaborate without large amounts of communication. In Tom DeMarco and Timothy Lister's book Peopleware, they call this the High-Tech Illusion:
...the widely held conviction among people who deal with any aspect of new technology (as who of us does not?) that they are in an intrinsically high-tech business. ... The researchers who made fundamental breakthroughs in those areas are in a high-tech business. The rest of us are appliers of their work. We use computers and other new technology components to develop our products or to organize our affairs. Because we go about this work in teams and projects and other tightly knit working groups, we are mostly in the human communication business. Our successes stem from good human interactions by all participants in the effort, and our failures stem from poor human interactions.

(Emphasis mine.) That insight is part of what initially inspired this series of posts. It's very astute, and most organizations work along those lines, and thus need to figure out ways to account for the additional costs of communication (this is particularly daunting, as such things are notoriously difficult to measure, but I'm getting ahead of myself). I suppose you could argue that both of these posts are somewhat inconclusive. Some of that is because they are part of a larger series, but also, as I've been known to say, human beings don't so much solve problems as they do trade one set of problems for another (in the hopes that the new problems are preferable to the old). Recognizing and acknowledging the problems introduced by collaboration and communication is vital to working on any large project. As I mentioned towards the beginning of this post, this only really scratches the surface of the subject of communication, but for the purposes of this series, I think I've blathered on long enough. My next topic in this series will probably cover the various difficulties of providing estimates. I'm hoping the groundwork laid in these first two posts will mean that the next post won't be quite so long, but you never know!
Posted by Mark on May 22, 2011 at 07:51 PM .: link :.
Wednesday, March 30, 2011
Nicholas Carr cracks me up. He's a skeptic of technology, and in particular, the internet. He's the guy who wrote the wonderfully divisive article, Is Google Making Us Stupid? The funny thing about all this is that he seems to have gained the most traction on the very platform he criticizes so much. Ultimately, though, I think he does have valuable insights and, if nothing else, he does raise very interesting questions about the impacts of technology on our lives. He makes an interesting counterweight to the techno-geeks who are busy preaching about transhumanism and the singularity. Of course, in a very real sense, his opposition dooms him to suffer from the same problems as those he criticizes. Google and the internet may not be a direct line to godhood, but it doesn't represent a descent into hell either. Still, reading some Carr is probably a good way to put techno-evangelism into perspective and perhaps reach some sort of Hegelian synthesis of what's really going on.
Otakun recently pointed to an excerpt from Carr's latest book. The general point of the article is to examine how human memory is being conflated with computer memory, and whether or not that makes sense:
...by the middle of the twentieth century memorization itself had begun to fall from favor. Progressive educators banished the practice from classrooms, dismissing it as a vestige of a less enlightened time. What had long been viewed as a stimulus for personal insight and creativity came to be seen as a barrier to imagination and then simply as a waste of mental energy. The introduction of new storage and recording media throughout the last century—audiotapes, videotapes, microfilm and microfiche, photocopiers, calculators, computer drives—greatly expanded the scope and availability of “artificial memory.” Committing information to one’s own mind seemed ever less essential. The arrival of the limitless and easily searchable data banks of the Internet brought a further shift, not just in the way we view memorization but in the way we view memory itself. The Net quickly came to be seen as a replacement for, rather than just a supplement to, personal memory. Today, people routinely talk about artificial memory as though it’s indistinguishable from biological memory.

While Carr is perhaps more blunt than I would be, I have to admit that I agree with a lot of what he's saying here. We often hear about how modern education is improved by focusing on things like "thinking skills" and "problem solving", but the big problem with emphasizing that sort of work ahead of memorization is that the analysis needed for such processes require a base level of knowledge in order to be effective. This is something I've expounded on at length in a previous post, so I won't rehash that here.
The interesting thing about the internet is that it enables you to get to a certain base level of knowledge and competence very quickly. This doesn't come without its own set of challenges, and I'm sure Carr would be quick to point out that such a crash course would yield a false sense of security in us hapless internet users. After all, how do we know when we've reached that base level of competence? Our incompetence could very well be masking our ability to recognize our incompetence. However, I don't think that's an insurmountable problem. Most of us that use the internet a lot view it as something of a low-trust environment, which can, ironically, lead to a better result. On a personal level, I find that what the internet really helps with is to determine just how much I don't know about a subject. That might seem like a silly thing to say, but even recognizing that your unknown unknowns are large can be helpful.
Some other assorted thoughts about Carr's excerpt:
Posted by Mark on March 30, 2011 at 06:06 PM .: link :.
Wednesday, February 02, 2011
I'm currently reading Cognitive Surplus: Creativity and Generosity in a Connected Age, by Clay Shirky. There seems to be a pattern emerging from certain pop-science books I've been reading in the past few years. Namely, a heavy reliance on fascinating anecdotes, counter-intuitive psychology experiments, and maybe a little behavioral economics thrown in for good measure. Cognitive Surplus most certainly fits the mold. Another book I've read recently, How We Decide by Jonah Lehrer, also fits. Most of Malcolm Gladwell's work does too (indeed, he's a master of the anecdote).
I don't think there's anything inherently wrong with this format. In fact, it can be quite entertaining and sometimes even informative. But sometimes I feel a bit uncomfortable with the conclusions that are drawn from all of this. Anecdotes, even well documented anecdotes, can make for great reading, but that doesn't necessarily make them broadly applicable. Generalizing or extrapolating from anecdotes can lead to some problematic conclusions. This is a difficult subject to tackle though, because humans seem to be hard wired to do exactly that. The human brain is basically a giant heuristic machine.
This is not a bad thing. Heuristics are an important part of human life because we don't always have all the information needed to use a more reliable, logical process. We all extrapolate from our own experiences; that is to say, we rely on anecdotal evidence in our daily lives all the time. It allows us to operate in situations which we do not understand.
Unfortunately, it's also subjective and not entirely reliable. The major issue is that it's rather easy to convince yourself that you properly understand a problem when, in fact, you don't. In other words, our incompetence masks our ability to recognize our incompetence. As a result, we see things like Cargo Cults. Superstitions and folk beliefs are also heuristics, albeit generally false ones. But they arise because producing such explanations is a necessary part of our lives. We cannot explain everything we see, and since we often need to act on what we see, we must rely on less-than-perfect heuristics and processes.
So in a book like Cognitive Surplus, there's this instinctual impulse to agree with conclusions extrapolated from anecdotes, which is probably the source of my discomfort. It's not that I doubt the factual content of the anecdotes, it's that I'm not always sure how to connect the anecdote with the conclusion. In many cases, it seems like an intuitive leap, but as previously noted, this is a subjective process.
Of course, Shirky does not rely solely on anecdotal evidence in his book (nor do the other authors mentioned above). There are the aforementioned psychology experiments and behavioral economics studies that rely on the scientific notions of strictly controlled conditions and independent reproduction. The assumption is that conclusions extrapolated from this more scientific data are more reliable. But is it possible that they could suffer from the same problems as anecdotes?
Maybe. The data is almost always presented in an informal, summarized format (very similar, in fact, to the way anecdotes are formed), which can leave a lot of wiggle room. For instance, strictly controlled conditions necessary to run an experiment can yield qualifying factors that will make the results less broadly applicable than we may desire. I find this less troubling in cases where I'm already familiar with a study, such as the Ultimatum Game. It also helps that such a study has been independently reproduced countless times since it first appeared, and that many subsequent tests have refined various conditions and variables to see how the results would come out (and they all point in the expected direction).
Later in the book, Shirky references an economic study performed on 10 day-care centers in Haifa, Israel. I will not get into the details of the study (this post is not a review of Shirky's book, after all), except to say that it was a single study, performed in a narrow location, with a relatively small data set. I don't doubt the objective results, but unlike the Ultimatum Game, this study does not seem to have a long history of reproduction, nor did the researchers conduct obvious follow-up experiments (perhaps there are additional studies, but they are not referenced by Shirky). The results seem to violate certain economic assumptions we're all familiar with, but they are also somewhat intuitive when you realize why the results came out the way they did. On the other hand, how do we know why they came out that way? I'm virtually certain that if you vary one particular variable of the experiment, you'll receive the expected result. Then what?
I don't mean to imply that these books are worthless or that they don't contain valuable insights. I generally find them entertaining, helpful and informative, sometimes even persuasive. I like reading them. However, reading a book like this is not a passive activity. It's a dialogue. In other words, I don't think that Cognitive Surplus is the last word on the subjects that Shirky is writing about, despite a certain triumphal tone in his writing. It's important to recognize that there is probably more to this book than what is on the page. That's why there's a lengthy Notes section with references to numerous papers and studies for further reading and clarification. Cognitive Surplus raises some interesting questions and it proposes some interesting answers, but it's not the end of the conversation.
Update: I thought of a few books that I think are better about this sort of thing, and there's a commonality that's somewhat instructive. One example is The Paradox of Choice: Why More Is Less, by Barry Schwartz. Another is Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi. The interesting thing about both of these books is that they are written by researchers who have conducted a lot of the research themselves. Both of them are very careful in the way they phrase their conclusions, making sure to point out qualifying factors, etc... Shirky, Gladwell, etc... seem to be summarizing the work of others. This is also valuable, in its own way, but perhaps less conclusive? (Then again, correlation does not necessarily mean causation. This update basically amounts to a heuristic, and one based on the relatively small sample of pop-science books I've read, so take it with a grain of salt.)
Again Update: I wrote this post before finishing Cognitive Surplus. I'm now finished, and in the last chapter, Shirky notes (pages 191-192):
The opportunity we collectively share, though, is much larger than even a book's worth of examples can express, because those examples, and especially the ones that involve significant cultural disruption, could turn out to be special cases. As with previous revolutions driven by technology - whether it is the rise of literate and scientific culture with the spread of the printing press or the economic and social globalization that followed the invention of the telegraph - what matters now is not the new capabilities that we have, but how we turn those capabilities, both technical and social, into opportunities.
In short, I think Shirky is acknowledging what was making me uncomfortable throughout the book: anecdotes and examples can't paint the whole picture. Shirky's book is not internet triumphalism, but a call to action. I suppose you could argue that even the assertion that these opportunities exist at all is a form of triumphalism, but I don't think so.
Posted by Mark on February 02, 2011 at 08:27 PM .: link :.
Sunday, November 21, 2010
Adventures in Brewing - Part 2: The Bottling
A couple of weeks ago, I started brewing an English Brown Ale. After two weeks in the fermenter, I went ahead and bottled the beer this weekend. Just another couple of weeks in the bottle to condition, and they should be ready to go (supposedly, the impatient can try it after a week, which I might have to do, just to see what it's like and how it ages).
The final gravity ended up at around 1.008, so if my calculations (and my hydrometer readings, which are probably more approximate than I'd like) are correct, this should yield something around 4.5% alcohol. Both my hydrometer readings were a bit low according to the worksheet/recipe I was using, but that ABV is right in the middle of the range. I suspect this means there won't be as much sugar in the beer and thus the taste will be a bit less powerful, but I guess we'll find out.
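For the curious, the back-of-the-envelope math here is the standard homebrewer's approximation, ABV ≈ (OG − FG) × 131.25. A quick sketch (only the 1.008 final gravity comes from this batch; the 1.042 original gravity is my assumption, chosen to be consistent with the ~4.5% figure):

```python
# Estimate alcohol by volume from hydrometer readings using the common
# homebrewing approximation ABV ~= (OG - FG) * 131.25. The OG value below
# is an assumption; only the FG of 1.008 comes from this batch.
def estimate_abv(og: float, fg: float) -> float:
    """Approximate ABV (percent) from original and final specific gravity."""
    return (og - fg) * 131.25

abv = estimate_abv(1.042, 1.008)
print(f"Estimated ABV: {abv:.1f}%")  # roughly 4.5%
```

The multiplier itself is a rule of thumb; more precise formulas exist, but at homebrew scale the hydrometer readings are the bigger source of error anyway.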
I ended up with a little more than a case and a half of bottled beer, which is probably a bit low. I was definitely overcautious about racking the beer to my bottling bucket. Not wanting to transfer any yeast and never having done it before, I was a little too conservative in stopping the siphoning process (which was a lot easier and faster than I was expecting - just add the priming sugar and get the siphon started and it only took a few minutes to transfer the vast majority of the beer to the bottling bucket). Next time I should be able to get around two full cases out of a 5 gallon batch.
Once in the bottling bucket, the process went pretty smoothly, and I actually found filling the bottles up and capping them to be pretty fun (the bottling wand seems like a life saver - I'd hate to do this with just a tube). Once I got towards the bottom of the bucket, it was a bit of a challenge to get as much out of there as possible without oxidizing the beer too much. I managed to get myself a quick cup of the beer and took a few sips. Of course, it was room temperature and not carbonated enough (carbonation happens in the bottle, thanks to the priming sugar), but it sure was beer. I didn't detect anything "off" about the taste, and it smelled pretty good too. Maybe I managed to not screw it up!
Siphoning the beer
(Cross posted at the Kaedrin Beer Blog, along with some other stuff posted today)
Posted by Mark on November 21, 2010 at 07:04 PM .: link :.
Wednesday, November 10, 2010
Earlier in the year, I had noticed a pile of books building up on the shelf and have made a concerted effort to get through them. This has gone smoothly at times, and at other times it's ground to a halt. Then there's the fact that I can't seem to stop buying new books to read. Case in point, during the Six Weeks of Halloween, I thought it might be nice to read some horror, and realized that most of what I had on my shelf was science fiction, fantasy, detective fiction, or non-fiction (history, technology, biography, etc...). So I went out and picked up a collection of Richard Matheson short stories called Button, Button (the title story was the source material for a very loose film adaptation, The Box).
It was a very interesting collection of stories, many of which play on variations of the moral dilemma most famous in the title story, Button, Button:
"If you push the button," Mr Steward told him, "somewhere in the world, someone you don't know will die. In return for which you will receive fifty thousand dollars."
In the film adaptation, the "reward" was raised to a million dollars, but then, they also added a ton of other stuff to what is really a tight, 12-page story. Anyway, there are lots of other stories, most containing some sort of moral dilemma along those lines (or someone exploiting such a dilemma). In particular, I enjoyed A Flourish of Strumpets and No Such Thing as a Vampire, but I found myself most intrigued by one of the longer stories, titled Mute. I suppose mild spoilers ahead, if this is something you think you might want to read.
The story concerns a child named Paal. His parents were recent immigrants and he was homeschooled, but his parents died in a fire, leaving Paal to the care of the local Sheriff and his wife. Paal is a mute, and the community is quite upset by this. Paal ends up being sent to school, but his seeming lack of communication skills cause issues, and the adults continually attempt to get Paal to talk.
I will leave it at that for now, but if you're at all familiar with Matheson, you can kinda see where this is going. What struck me most was how much a sign of the times this story was. Of course, all art is a product of its cultural and historical context, but for horror stories, that must be doubly so. Most of the stories in this collection were written and published in the 1950s and early 1960s, which I find interesting. With respect to this story, it's primarily about the crushing pressure of conformity, something that was surely on Matheson's mind after having just lived through the uniformity of the 1950s. The cultural norms of the 50s were perhaps overly traditional, but after having witnessed the deadliest conflict in human history in the 1940s, you can hardly blame people for wanting some semblance of tradition and stability in their lives. Of course, that sort of uniformity isn't really natural, and like a pendulum, things swing from one extreme to the other, until eventually they settle down. Or not.
Anyway, writing in the early 60s (or maybe even the late 50s), Matheson was clearly disturbed by the impulse to force conformity, and Mute is a clear expression of this anxiety. Interestingly, the story is almost as horrific in today's context, but for different reasons. Matheson was writing in response to a society that had been emphasizing conformity and had no doubt witnessed such abuses himself. Notably, the end of the story is somewhat bittersweet. It's not entirely tragic, and it's almost an acknowledgement that conformity isn't necessarily evil.
It was not something easily judged, he was thinking. There was no right or wrong of it. Definitely, it was not a case of evil versus good. Mrs. Wheeler, the sheriff, the boy's teacher, the people of German Corners - they had, probably, all meant well. Understandably, they had been outraged at the idea of a seven-year-old boy not having been taught to speak by his parents. Their actions were, in light of that, justifiable and good.
In today's world, we see the opposite of the 1950s in many ways. Emphasis is no longer placed on conformity (well, perhaps it still is in some places), but rather a rugged individuality. There are no one-size-fits-all pieces of culture anymore. We've got hundreds of varieties of spaghetti sauce, thousands of music choices that can fit on a device the size of a business card, movies that are designed to appeal to small demographics, and so on. We deal with problems like the paradox of choice, and the internet has given rise to the niche and concepts like the Long Tail. Of course, rigid non-conformity is, in itself, a form of conformity, but I can't imagine a story like Mute being written in this day and age. A comparable story would be about how lost someone becomes when they don't conform to societal norms...
Sunday, September 05, 2010
Another edition of Tasting Notes, a series of quick hits on a variety of topics that don't really warrant a full post. So here's what I've been watching/playing/reading/drinking lately:
Wednesday, August 04, 2010
A/B Testing Spaghetti Sauce
Earlier this week I was perusing some TED Talks and ran across this old (and apparently popular) presentation by Malcolm Gladwell. It struck me as particularly relevant to several topics I've explored on this blog, including Sunday's post on the merits of A/B testing. In the video, Gladwell explains why there are a billion different varieties of Spaghetti sauce at most supermarkets:
The key insight Gladwell discusses in his video is basically the destruction of the Platonic Ideal (I'll summarize in this paragraph in case you didn't watch the video, which covers the topic in much more depth). He talks about Howard Moskowitz, who was a market research consultant with various food industry companies that were attempting to optimize their products. After conducting lots of market research and puzzling over the results, Moskowitz eventually came to a startling conclusion: there is no perfect product, only perfect products. Moskowitz made his name working with spaghetti sauce. Prego had hired him in order to find the perfect spaghetti sauce (so that they could compete with rival company, Ragu). Moskowitz developed dozens of prototype sauces and went on the road, testing each variety with all sorts of people. What he found was that there was no single perfect spaghetti sauce, but there were basically three types of sauce that people responded to in roughly equal proportion: standard, spicy, and chunky. At the time, there were no chunky spaghetti sauces on the market, so when Prego released their chunky spaghetti sauce, their sales skyrocketed. A full third of the market was underserved, and Prego filled that need.
Decades later, this is hardly news to us and the trend has spread from the supermarket into all sorts of other arenas. In entertainment, for example, we're seeing a move towards niches. The era of huge blockbuster bands like The Beatles is coming to an end. Of course, there will always be blockbusters, but the really interesting stuff is happening in the niches. This is, in part, due to technology. Once you can fit 30,000 songs onto an iPod and you can download "free" music all over the internet, it becomes much easier to find music that fits your tastes better. Indeed, this becomes a part of peoples' identity. Instead of listening to the mass produced stuff, they listen to something a little odd and it becomes an expression of their personality. You can see evidence of this everywhere, and the internet is a huge enabler in this respect. The internet is the land of niches. Click around for a few minutes and you can easily find absurdly specific, single topic, niche websites like this one where every post features animals wielding lightsabers or this other one that's all about Flaming Garbage Cans In Hip Hop Videos (there are thousands, if not millions of these types of sites). The internet is the ultimate paradox of choice, and you're free to explore almost anything you desire, no matter how odd or obscure it may be (see also, Rule 34).
In relation to Sunday's post on A/B testing, the lesson here is that A/B testing is an optimization tool that allows you to see how various segments respond to different versions of something. In that post, I used an example where an internet retailer was attempting to find the ideal imagery to sell a diamond ring. A common debate in the retail world is whether that image should just show a closeup of the product, or if it should show a model wearing the product. One way to solve that problem is to A/B test it - create both versions of the image, segment visitors to your site, and track the results.
As discussed Sunday, there are a number of challenges with this approach, but one thing I didn't mention is the unspoken assumption that there actually is an ideal image. In reality, there are probably some people that prefer the closeup and some people who prefer the model shot. An A/B test will tell you what the majority of people like, but wouldn't it be even better if you could personalize the imagery used on the site depending on what customers like? Show the type of image people prefer, and instead of catering to the most popular segment of customer, you cater to all customers (the simple diamond ring example begins to break down at this point, but more complex or subtle tests could still show significant results when personalized). Of course, this is easier said than done - just ask Amazon, who does CRM and personalization as well as any retailer on the web, and yet manages to alienate a large portion of their customers every day! Interestingly, this really just shifts the purpose of A/B testing from one of finding the platonic ideal to finding a set of ideals that can be applied to various customer segments. Once again we run up against the need for more and better data aggregation and analysis techniques. Progress is being made, but I'm not sure what the endgame looks like here. I suppose time will tell. For now, I'm just happy that Amazon's recommendations aren't completely absurd for me at this point (which I find rather amazing, considering where they were a few years ago).
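To make the personalization idea concrete, here's a minimal sketch of serving per-customer imagery. Everything in it (the variant names, the preference store, the customer IDs) is hypothetical; the point is just the shift from one global A/B winner to per-customer defaults:

```python
# Sketch: serve each returning customer the image style they've preferred
# in the past, falling back to the overall A/B winner for unknown visitors.
# All names and data here are hypothetical illustrations.
ab_test_winner = "model_shot"  # the variant the A/B test picked overall

# In a real system this would come from CRM/behavioral data, not a dict.
known_preferences = {
    "customer_42": "closeup",
    "customer_99": "model_shot",
}

def choose_image(customer_id: str) -> str:
    """Per-customer variant when we have data, else the population winner."""
    return known_preferences.get(customer_id, ab_test_winner)

choose_image("customer_42")  # a customer with a known preference
choose_image("new_visitor")  # falls back to the A/B winner
```

The hard part, of course, is populating that preference store reliably, which is exactly the data aggregation and analysis problem mentioned above.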
Posted by Mark on August 04, 2010 at 07:54 PM .: link :.
Sunday, August 01, 2010
Groundhog Day and A/B Testing
Jeff Atwood recently made a fascinating observation about the similarities between the classic film Groundhog Day and A/B Testing.
In case you've only recently emerged from a hermit-like existence, Groundhog Day is a film about Phil (played by Bill Murray). It seems that Phil has been doomed (or is it blessed) to live the same day over and over again. It doesn't seem to matter what he does during this day, he always wakes up at 6 am on Groundhog Day. In the film, we see the same day repeated over and over again, but only in bits and pieces (usually skipping repetitive parts). The director of the film, Harold Ramis, believes that by the end of the film, Phil has spent the equivalent of about 30 or 40 years reliving that same day.
Towards the beginning of the film, Phil does a lot of experimentation, and Atwood's observation is that this often takes the form of an A/B test. This is a concept that is perhaps a little more esoteric, but the principles are easy. Let's take a simple example from the world of retail. You want to sell a new ring on a website. What should the main image look like? For simplification purposes, let's say you narrow it down to two different concepts: one, a closeup of the ring all by itself, and the other a shot of a model wearing the ring. Which image do you use? We could speculate on the subject for hours and even rationalize some pretty convincing arguments one way or the other, but it's ultimately not up to us - in retail, it's all about the customer. You could "test" the concept in a serial fashion, but ultimately the two sets of results would not be comparable. The ring is new, so whichever image is used first would get an unfair advantage, and so on. The solution is to show both images during the same timeframe. You do this by splitting your visitors into two segments (A and B), showing each segment a different version of the image, and then tracking the results. If the two images do, in fact, cause different outcomes, and if you get enough people to look at the images, it should come out in the data.
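The segmenting-and-tracking mechanics described above can be sketched in a few lines. This is a hypothetical illustration rather than any particular tool's API: visitors are split deterministically by hashing their ID (so a returning visitor always sees the same image), and views and purchases are tallied per segment:

```python
# Sketch of an A/B split: deterministically assign visitors to segment A
# or B by hashing their ID, then tally views and purchases per segment.
# Visitor IDs and purchase events below are made up for illustration.
import hashlib

def assign_segment(visitor_id: str) -> str:
    """Split visitors roughly 50/50; the same ID always maps to the same segment."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Segment A sees the closeup image, segment B sees the model shot.
results = {"A": {"views": 0, "buys": 0}, "B": {"views": 0, "buys": 0}}

def record_visit(visitor_id: str, bought: bool) -> str:
    segment = assign_segment(visitor_id)
    results[segment]["views"] += 1
    if bought:
        results[segment]["buys"] += 1
    return segment

for vid, bought in [("visitor_1", True), ("visitor_2", False), ("visitor_3", True)]:
    record_visit(vid, bought)
```

With enough traffic, the per-segment conversion rates (`buys / views`) are what you'd compare to pick a winner.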
This is what Phil does in Groundhog Day. For instance, Phil falls in love with Rita (played by Andie MacDowell) and spends what seems like months compiling lists of what she likes and doesn't like, so that he can construct the perfect relationship with her.
Phil doesn't just go on one date with Rita, he goes on thousands of dates. During each date, he makes note of what she likes and responds to, and drops everything she doesn't. At the end he arrives at -- quite literally -- the perfect date. Everything that happens is the most ideal, most desirable version of all possible outcomes on that date on that particular day. Such are the luxuries afforded to a man repeating the same day forever.As Atwood notes, the interesting thing about this process is that even once Phil has constructed that perfect date, Rita still rejects Phil. From this example and presumably from experience with A/B testing, Atwood concludes that A/B testing is empty and that subjects can often sense a lack of sincerity behind the A/B test.
It's an interesting point, though I'm not sure it's entirely applicable in all situations. Atwood admits that A/B testing is good at smoothing out details, but there's something more at work in Groundhog Day that he doesn't mention: namely, that Phil is using A/B testing to misrepresent himself as the ideal mate for Rita. Yes, he's done the experimentation to figure out what "works" and what doesn't, but his testing was ultimately shallow. Rita didn't reject him because he had all the right answers; she rejected him because he was attempting to deceive her. He was misrepresenting himself, and that certainly can lead to a feeling of emptiness.
If you look back at my example above about the ring being sold on a retail website, you'll note that there's no deception going on there. Somehow I doubt either image would result in a hollow feeling for the customer. Why is this different from Groundhog Day? Because neither image misrepresents the product, and one would assume that the website is pretty clear about the fact that you can buy things there. Of course, there are a million different variables you could test (especially once you get into text and marketing hooks, etc...) and some of those could be more deceptive than others, but most of the time, deception is not the goal. There is a simple choice to be made: instead of constantly wondering about your product image and second-guessing yourself, why not A/B test it and see what customers like better?
There are tons of limitations to this approach, but I don't think it's as inherently flawed as Atwood seems to believe. Still, the data you get out of an A/B test isn't always conclusive, and even if it is, whatever lessons you draw aren't necessarily applicable in all situations. For instance, what works for our new ring can't necessarily be applied to all new rings (this is a problem for me, as my employer has a high turnover rate for products - as such, the simple example of the ring as described above would not be a good test for my company unless the ring would be available for a very long time). Furthermore, while you can sometimes pick a winner, it's not always clear why it's a winner. This is especially the case when the differences between A and B are significant (for instance, testing an entirely redesigned page might yield results, but you will not know which of the changes to the page actually caused said results - on the other hand, A/B testing is really the only way to accurately calculate ROI on significant changes like that).
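One way to tell a conclusive result from an inconclusive one, rather than rationalizing the data after the fact, is a plain significance check. Here's a sketch of a two-proportion z-test using only the standard library; the conversion counts are invented for illustration:

```python
# Sketch: is the difference between two conversion rates statistically
# significant? A two-proportion z-test using only the standard library.
# The conversion counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Segment A: 120 buys out of 1000 views; segment B: 150 out of 1000.
z, p = two_proportion_z(120, 1000, 150, 1000)
# By the usual convention, a p-value above 0.05 means "inconclusive":
# keep the test running (or accept that you can't call a winner yet).
```

This doesn't fix the deeper problem Phil has (testing the wrong thing entirely), but it at least keeps you from declaring victory on noise.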
Obviously these limitations should be taken into account when conducting an A/B test, and I think what Phil runs into in Groundhog Day is a lack of conclusive data. One of the problems with interpreting inconclusive data is that it can be very tempting to rationalize it. Phil's initial attempts to craft the perfect date for Rita fail because he's really only scraping the surface of her needs and desires. In other words, he's testing the wrong thing, misunderstanding the data, and thus getting inconclusive results.
The interesting thing about the Groundhog Day example is that, in the end, the movie is not a condemnation of A/B testing at all. Phil ultimately does manage to win the affections of Rita. Of course, it took him decades to do so, and that's worth taking into account. Perhaps what the film is really saying is that A/B testing is often more complicated than it seems and that the results you get depend on what you put into it. A/B testing is not the easy answer it's often portrayed as, and it should not be the only tool in your toolbox (i.e. forcing employees to prove that using 3, 4 or 5 pixels for a border is ideal is probably going a bit too far), but neither is it as empty as Atwood seems to be indicating. (And we didn't even talk about multivariate tests! Let's get Christopher Nolan on that. He'd be great at that sort of movie, wouldn't he?)
Wednesday, July 14, 2010
So Nick from CHUD recently revived the idea of a "Tasting Notes..." post that features a bunch of disconnected, scattershot notes on a variety of topics that don't really warrant a full post. It sounds like fun, so here are a few tasting notes...
Sunday, July 04, 2010
Noted documentary filmmaker Errol Morris has been writing a series of posts about incompetence for the NY Times. The most interesting parts feature an interview with David Dunning, a psychologist whose experiments have discovered what's called the Dunning-Kruger Effect: our incompetence masks our ability to recognize our incompetence.
DAVID DUNNING: There have been many psychological studies that tell us what we see and what we hear is shaped by our preferences, our wishes, our fears, our desires and so forth. We literally see the world the way we want to see it. But the Dunning-Kruger effect suggests that there is a problem beyond that. Even if you are just the most honest, impartial person that you could be, you would still have a problem — namely, when your knowledge or expertise is imperfect, you really don’t know it. Left to your own devices, you just don’t know it. We’re not very good at knowing what we don’t know.
I found this interesting in light of my recent posting about universally self-affirming outlooks (i.e. seeing the world the way we want to see it). In any case, the interview continues:
ERROL MORRIS: Knowing what you don’t know? Is this supposedly the hallmark of an intelligent person?
It may be smart and modest, but that sort of thing usually gets politicians in trouble. But most people aren't politicians, and so it's worth looking into this concept a little further. An interesting result of this effect is that a lot of the smartest, most intelligent people also tend to be somewhat modest (this isn't to say that they don't have an ego or that they can't act in arrogant ways, just that they tend to have a better idea about how much they don't know). Steve Schwartz has an essay called No One Knows What the F*** They’re Doing (or “The 3 Types of Knowledge”) that explores these ideas in some detail:
To really understand how it is that no one knows what they’re doing, we need to understand the three fundamental categories of information.
Schwartz has a series of very helpful charts that illustrate this, but most people drastically overestimate the amount of knowledge in the "shit you know" category. In fact, that's the smallest category, and it is dwarfed by the "shit you know you don’t know" category, which is, in itself, dwarfed by the "shit you don’t know you don’t know" category. The result is that most people who receive a lot of praise or recognition are surprised and feel a bit like a fraud.
This is hardly a new concept, but it's always worth keeping in mind. When we learn something new, we've gained some knowledge. We've put some information into the "shit we know" category. But more importantly, we've probably also taken something out of the "shit we don't know that we don't know" category and put it into the "shit we know that we don't know" category. This matters because the unknown-unknowns category is the most dangerous of the three, not least because our ignorance prevents us from really exploring it. As mentioned at the beginning of this post, our incompetence masks our ability to recognize our incompetence. In the interview, Morris references a short film he did once:
ERROL MORRIS: And I have an interview with the president of the Alcor Life Extension Foundation, a cryonics organization, on the 6 o’clock news in Riverside, California. One of the executives of the company had frozen his mother’s head for future resuscitation. (It’s called a “neuro,” as opposed to a “full-body” freezing.) The prosecutor claimed that they may not have waited for her to die. In answer to a reporter’s question, the president of the Alcor Life Extension Foundation said, “You know, we’re not stupid . . . ” And then corrected himself almost immediately, “We’re not that stupid that we would do something like that.”
One might be tempted to call this a cynical outlook, but what it basically amounts to is that there's always something new to learn. Indeed, the more we learn, the more there is to learn. Now, if only we could invent the technology like what's presented in Diaspora (from my previous post), so we can live long enough to really learn a lot about the universe around us...
Wednesday, June 23, 2010
Internalizing the Ancient
Otaku Kun points to a wonderful entry in the Astronomy Picture of the Day series:
I think it’s impossible to really relate to things beyond human timescales. The idea of something being “ancient” has no meaning if it predates our human comprehension. The Neanderthals disappeared 30,000 years ago, which is probably really the farthest back we can reflect on. When we start talking about human forebears of 100,000 years ago and more, it becomes more abstract - that’s why it’s no coincidence that the Battlestar Galactica series finale set the events 150,000 years ago, well beyond even the reach of mythological narrative.
I'm reminded of an essay by C. Northcote Parkinson, called High Finance or The Point of Vanishing Interest (the essay appears in Parkinson's Law, a collection of essays). Parkinson writes about how finance committees work:
People who understand high finance are of two kinds: those who have vast fortunes of their own and those who have nothing at all. To the actual millionaire a million dollars is something real and comprehensible. To the applied mathematician and the lecturer in economics (assuming both to be practically starving) a million dollars is at least as real as a thousand, they having never possessed either sum. But the world is full of people who fall between these two categories, knowing nothing of millions but well accustomed to think in thousands, and it is these that finance committees are mostly comprised.
He then goes on to explore what he calls the "Law of Triviality". Briefly stated, it means that the time spent on any item of the agenda will be in inverse proportion to the sum involved. Thus he concludes, after a number of humorous but fitting examples, that there is a point of vanishing interest where the committee can no longer comment with authority. Astonishingly, the amount of time that is spent on $10 million and on $10 may well be the same. There is clearly a space of time which suffices equally for the largest and smallest sums.
In short, it's difficult to internalize numbers that high, whether we're talking about large sums of money or cosmic timescales. Indeed, I'd even say that Parkinson was being a bit optimistic. Millionaires and mathematicians may have a better grasp on the situation than most, but even they are probably at a loss when we start talking about cosmic timeframes. Otaku Kun also mentions Battlestar Galactica, which did end on an interesting note (even if that finale was quite disappointing as a whole) and which brings me to one of the reasons I really enjoy science fiction: the contemplation of concepts and ideas that are beyond comprehension. I can't really internalize the cosmic information encoded in the universe around me in such a way as to do anything useful with it, but I can contemplate it and struggle to understand it, which is interesting and valuable in its own right. Perhaps someday, we will be able to devise ways to internalize and process information on a cosmic scale (this sort of optimistic statement perhaps represents another reason I enjoy SF).
Sunday, May 30, 2010
Someone sent me a note about a post I wrote on the 4th Kingdom boards in 2005 (August 3, 2005, to be more precise). It was in response to a thread about technology and consumer electronics trends, and the original poster gave two examples that were exploding at the time: "camera phones and iPods." This is what I wrote in response:
Heh, I think the next big thing will be the iPod camera phone. Or, on a more general level, mp3 player phones. There are already some nifty looking mp3 phones, most notably the Sony/Ericsson "Walkman" branded phones (most of which are not available here just yet). Current models are all based on flash memory, but it can't be long before someone releases something with a small hard drive (a la the iPod). I suspect that, in about a year, I'll be able to hit 3 birds with one stone and buy a new cell phone with an mp3 player and digital camera.

For an off-the-cuff, informal response, I think I did pretty well. Of course, I still got a lot of the specifics wrong. For instance, I'm pretty clearly talking about the iPhone, though that would have to wait about 2 years before it became a reality. I also didn't anticipate the expansion of flash memory to more usable sizes and prices. Though I was clearly talking about a convergence device, I didn't really say anything about what we now call "apps".
In terms of game consoles, I didn't really say much. My first thought upon reading this post was that I had completely missed the boat on the Wii; however, it appears that the Wii's new controller scheme wasn't shown until September 2005 (about a month after my post). I did manage to predict a winner in the HD video war though, even if I framed the prediction as a "high capacity DVD war" and spelled blu-ray wrong.
I'm not generally good at making predictions about this sort of thing, but it's nice to see when I do get things right. Of course, you could make the argument that I was just stating the obvious (which is basically what I did with my 2008 predictions). Then again, everything seems obvious in hindsight, so perhaps it is still a worthwhile exercise for me. If nothing else, it gets me to think in ways I'm not really used to... so here are a few predictions for the rest of this year:
Posted by Mark on May 30, 2010 at 09:00 PM .: link :.
Wednesday, March 10, 2010
Blast from the Past
A coworker recently unearthed a stash of a publication called The Net, a magazine published circa 1997. It's been an interesting trip down memory lane. In no particular order, here are some thoughts about this now defunct magazine.
Posted by Mark on March 10, 2010 at 07:19 PM .: link :.
Wednesday, December 30, 2009
More on Visual Literacy
In response to my post on Visual Literacy and Rembrandt's J'accuse, long-time Kaedrin friend Roy made some interesting comments about director Peter Greenaway's insistence that our ability to analyze visual art forms like paintings is ill-informed and impoverished.
It depends on what you mean by visually illiterate, I guess. Because I think that the majority of people are as visually literate as they are textually literate. What you seem to be comparing is the ability to read into a painting with the ability to read words, but that's not just reading, you're talking about analyzing and deconstructing at that point. I mean, most people can watch a movie or look at a picture and do some basic contextualizing. ... It's not for lack of literacy, it's for lack of training. You know how it is... there's reading, and then there's Reading. Most people in the United States know how to read, but that doesn't mean that they know how to Read. Likewise with visual materials--most people know how to view a painting, they just don't know how to View a Painting. I don't think we're visually illiterate morons, I just think we're only superficially trained.

I mostly agree with Roy, and I spent most of my post critiquing Greenaway's film for similar reasons. However, I find the subject of visual literacy interesting. First, as Roy mentions, it depends on how you define the phrase. When we hear the term literacy, we usually mean the ability to read and write, but there's also a more general definition of being educated or having knowledge within a particular subject or field (i.e. computer literacy or, in our case, visual literacy). Greenaway is clearly emphasizing the more general definition. It's not that he thinks we can't see a painting, it's that we don't know enough about the context of the paintings we are viewing.
Roy is correct to point out that most people actually do have relatively sophisticated visual skills:
Even when people don't have the vocabulary or training, they still pick up on things, because I think we use symbols and visual language all the time. We read expressions and body language really well, for example. Almost all of our driving rules are encoded first and foremost as symbols, not words--red=stop, green=go, yellow=caution. You don't need "Stop" or "Yield" on the sign to know which it is--the shape of the sign tells you.

Those are great examples of visual encoding and conventions, but do they represent literacy? Why does a stop sign represent what it does? There are three main components to the stop sign:
However, it's worth noting that the clear meaning of a stop sign is also due to the fact that it's a near universal convention used throughout the entire world. Not all traffic signals are as well defined. Case in point, what does a blinking green traffic light represent? Blinking red means to "stop, then proceed with caution" (kinda like a stop sign). Blinking yellow means to "slow down and proceed with caution." So what does a blinking green mean? James Grimmelmann tried to figure it out:
It turns out (courtesy of the ODP and rec.travel), perhaps unsurprisingly, that there is no uniform agreement on the meaning of a blinking green light. In a bunch of Canadian provinces, it has the same general meaning that a regular green light does, with the added modifier that you are the undisputed master of all you survey. All other traffic entering the intersection has a stop sign or a red light, and must bow down before your awesome cosmic powers. On the other hand, if you're in Massachusetts or British Columbia and you try a no-look Ontario-style left turn on a blinking green, you're liable to get into a smackup, since the blinking green means only that cross traffic is seeing red, with no guarantees about oncoming traffic.

Now, maybe it's just because we're starting to get obscure and complicated here, but the reason traffic signals work is because we've established a set of conventions that are similar most everywhere. But when we mess around with them or get too complicated, it could be a problem. Luckily, we don't do that sort of thing very often (even the blinking green example is probably vanishingly obscure - I've never seen or even heard of that happening until reading James' post). These conventions are learned, usually through simple observation, though we also regulate who can drive and require people to study the rules of driving (including signs and lights) before granting a license.
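The patchwork of meanings James describes can be captured in a tiny lookup table. Here's a toy Python sketch (the jurisdictions and wordings below just paraphrase the quote above; nothing here is authoritative traffic law):

```python
# Toy encoding of the blinking-green ambiguity described above.
# Meanings are paraphrased from the quoted post; illustration only.
BLINKING_GREEN = {
    "Ontario": "protected: cross AND oncoming traffic must stop",
    "Massachusetts": "cross traffic sees red; oncoming traffic not guaranteed",
    "British Columbia": "cross traffic sees red; oncoming traffic not guaranteed",
}

def blinking_green_meaning(jurisdiction):
    # There is no uniform convention, so unknown places get a safe default.
    return BLINKING_GREEN.get(jurisdiction, "undefined: proceed with caution")
```

The default branch is really the whole point: once a visual convention stops being universal, the "literacy" that depends on it breaks down.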
Another example, perhaps surprising because it is something primarily thought of as a textual medium, is newspapers. Take a look at this front page of a newspaper1:
Newspapers use numerous techniques (such as prominence, grouping, and nesting) to establish a visual hierarchy, allowing readers to scan the page to find what stories they want to read. In the image above, the size of the headline (Victory!) as well as its placement on the page makes it clear at a glance that this is the most important story. The headline "Miami Police Department Unveils New Pastel Pink and Aqua Uniforms" spans three columns of text, making it obvious that they're all part of the same story. Furthermore, we know the picture of Crockett and Tubbs goes with the same story because both the picture and the text are spanned by the same headline. And so on.
Now I know what my younger readers2 are thinking: What the fuck is this "newspaper" thing you're babbling about? Well, it turns out that a lot of the same conventions apply to the web. There are, of course, new conventions on the web (for instance, links are usually represented by different colored text that is also underlined), but many of the same techniques are used to establish a visual hierarchy on the web.
What's more interesting about newspapers and the web is that we aren't really trained how to read them, but we figure it out anyway. In his excellent book on usability, Don't Make Me Think, Steve Krug writes:
At some point in our youth, without ever being taught, we all learned to read a newspaper. Not the words, but the conventions.

The tricky part about this is that the learning seems to happen subconsciously. Large type is pretty obvious, but column spanning? Captions? Nesting? Some of this stuff gets pretty subtle, and for the most part, people don't care. They just scan the page, find what they want, and read the story. It's just intuitive.
But designing a layout is not quite as intuitive. Many of the lessons we have internalized in reading a newspaper (or a website) aren't really available to us in a situation where we're asked to design a layout. If you want a good example of this, look at web pages designed in the mid-90s. By now, we've got blogs and mini-CMS style systems that automate layouts and take design out of most people's hands.
So, does Greenaway have a valid point? Or is Roy right? Obviously, we all process visual information, and visual symbolism is frequently used to encode large amounts of information into a relatively small space. Does that make us visually literate? I guess it all comes down to your definition of literate. Roy seems to take the more specific definition of "able to read or write" while Greenaway seems to be more concerned with "education or knowledge in a specified field." The question then becomes, are we more textually literate than we are visually literate? Greenaway certainly seems to think so. Roy seems to think we're just about equal on both fronts. I think both positions are defensible, especially when you consider that Greenaway is talking specifically about art. Furthermore, his movie is about a classical painting that was created several centuries ago. For most young people today, art is more diffuse. When you think about it, almost anything can be art. I suspect Greenaway would be disgusted by that sort of attitude, which is perhaps another way to view his thoughts on visual literacy.
1 - Yeah, it's the Onion and not a real newspaper per se, but it's fun and it's representative of common newspaper conventions.
2 - Hahaha, as if I have more than 5 readers, let alone any young readers.
Sunday, June 28, 2009
Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how they work, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor is performing these operations in a serial fashion - basically a single-file line of operations.
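To give a feel for how many primitive steps hide inside "simple" addition, here's a short Python sketch of the carry-and-XOR loop that hardware adders implement with gates (my own illustration, not code from any real CPU):

```python
def add(a, b):
    """Add two non-negative integers using only bitwise operations,
    mimicking the half-adder logic a CPU builds out of transistor gates."""
    while b != 0:
        carry = (a & b) << 1  # bits where both inputs are 1 generate a carry
        a = a ^ b             # XOR sums the bits, ignoring carries
        b = carry             # repeat until no carries are left
    return a

print(add(13, 29))  # → 42
```

Even this toy version loops several times for small numbers; the hardware equivalent is doing the same dance with voltages, millions of times per second.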
This single-file line could be quite inefficient, and there are times when you want a computer to be processing many different things at once, rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. For instance, when a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained with frequent context switches.
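The save-and-restore dance can be sketched with Python generators, whose suspended frames play the role of saved CPU state. This is a toy cooperative scheduler of my own devising, not how a real preemptive OS works:

```python
# Each task's "CPU state" lives in its generator frame, so switching
# tasks means suspending one frame and resuming another.
def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # yield = the point where we get switched out

def round_robin(tasks):
    """Run tasks one step at a time, context-switching after every step."""
    log = []
    while tasks:
        current = tasks.pop(0)
        try:
            log.append(next(current))  # restore state, run one step, save state
            tasks.append(current)      # requeue at the back of the line
        except StopIteration:
            pass                       # task finished; drop it
    return log

log = round_robin([task("A", 2), task("B", 2)])
# log interleaves the tasks: A, B, A, B
```

Each `next()` call restores the task's saved state, runs it one step, and implicitly saves it again at the `yield` - which is the essence of a context switch, minus the overhead a real kernel pays to stash registers and memory mappings.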
If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Signaling the CPU to do a context switch is often accomplished with a mechanism called an Interrupt. For the most part, the computers we're all using are Interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
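One crude way to model that "higher-priority requests win" behavior is a priority queue: whenever work is pending, the highest-priority item gets serviced next, so a late-arriving high-priority interrupt jumps ahead of queued low-priority work. This is a toy model with invented names; real interrupts are hardware signals, not Python function calls:

```python
import heapq

pending = []  # min-heap of (priority, name); lower number = higher priority

def raise_interrupt(priority, name):
    heapq.heappush(pending, (priority, name))

def run():
    """Service pending work strictly in priority order."""
    serviced = []
    while pending:
        _, name = heapq.heappop(pending)
        serviced.append(name)
    return serviced

raise_interrupt(5, "background job")
raise_interrupt(1, "disk read complete")   # high priority, arrives last-but-one
raise_interrupt(3, "keypress")
order = run()  # → ["disk read complete", "keypress", "background job"]
```

Note that arrival order doesn't matter, only priority - which is exactly why a running low-priority process can be interrupted at any moment.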
This might sound tedious to us, but computers are excellent at this sort of processing. They will do millions of operations per second, and generally have no problem switching from one program to the other and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms or a number of other physical constraints, and so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be to figure out how to use parallel computing as well as we now use serial computing. Hence all the talk about Multi-core processing (most commonly used with 2 or 4 cores).
Parallel computing can do many things which are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason for that is that there are truly massive numbers of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's brain, but elephants are obviously much less intelligent than humans.
Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that brain size or neuron count are not as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.
However, this all comes with its own set of tradeoffs. With respect to this post, the most relevant of which is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).
In a computer, everything is happening in serial and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system, they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so doing a context switch introduces a lot of thrash in what you were originally doing because you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you need to ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.
One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.
From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is there (and complain when people are there that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.
A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to do at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, and thus many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a large number of meetings on our calendars, which only makes it more difficult to concentrate on something important.
Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.
Another example: if it's 2:40 pm and I know I have a meeting at 3 pm - should I start working on a task I know will take me 3 solid hours or so to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.
Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).
(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice/versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.
Wednesday, February 04, 2009
I've always considered myself something of a nerd, even back when being nerdy wasn't cool. Nowadays, everyone thinks they're a nerd. MGK recently noticed this:
Recently, I was surfing the net looking for lols, and came across a personal ad on Craigslist. The ad was not in and of itself hilarious, but one thing struck me. The writer described herself as “nerdy,” and as an example of her nerdiness, explained that she loved to watch Desperate Housewives.

To address this situation, he has devised "a handy guide for people to define their own nerdiness, based on a number of nerdistic passions." I'm a little surprised at how poorly I did in some of these categories.
Posted by Mark on February 04, 2009 at 10:45 PM .: link :.
Sunday, January 04, 2009
The PS3, Revisiting Predictions & Other Odds & Ends
The PS3 came yesterday, so I've spent most of the time since then in a Blu-Ray and Video Game induced haze. I was lured out by my brother this afternoon to watch the Eagles playoff game (we won!) and maybe feed myself too. While I'm out, I figure I should at least make some pretense at updating the blog with something...
Posted by Mark on January 04, 2009 at 08:33 PM .: link :.
Wednesday, December 17, 2008
12DC: Day 4 - Eggnog
A family tradition has grown over the past few years. Every Thanksgiving, we have an Eggnog tasting. Nothing fancy or scientific (though perhaps that can be arranged next year!) and we're pretty bad about organizing this. Point of fact, this year, we only had 4 varieties to try out. Last year, however, was a different story. Again, due to poor planning, several people brought several different varieties, which led us to have 14 different brands of eggnog.
For reference, these are the eggnogs pictured:
Up until this event started, I'd never been much of a fan of eggnog. There's just something unappealing about a substance that is so scary-bad-for-you that you can only consume it for a limited period of the year. But I've grown into it and am looking forward to next year's tasting...
Posted by Mark on December 17, 2008 at 07:05 PM .: link :.
Tuesday, December 16, 2008
12DC: Day 3 - The Christmas Cactus
What do you use, a tree? Pfft!
The traditional Kaedrin Christmas cactus strikes again. Also striking again, my poor photography skillz! More to come...
Posted by Mark on December 16, 2008 at 06:45 PM .: link :.
Wednesday, November 26, 2008
Geekout: Alien vs. Predator
A while ago, I ran across this McSweeney's article that pit Alien vs. Predator in a series of unlikely events like Macramé and Lincoln-Douglas Debating. Long time readers will know that I am a fan of the Alien vs. Predator concept, though the recent films have been awful (Alien, Aliens, and Predator are some of my favorite movies though, and the original AvP comic book was fantastic). In any case, I couldn't resist discussing and debating some of the events listed out, and the result was a pretty amusing (and incredibly geeky) conversation.
The first event under question was Breakdancing. I had picked the Alien for this and thought it was the obvious choice. My friend Roy disagreed, noting:
I think you've failed to take into account the unique physiology of the alien. Those tubes on his back? The tail? Those are going to make dancing very difficult. No backspins for him. I think that the Predator's upper body strength will help him to pull off some awesome moves. And, he doesn't have big pipes or tubes coming up out of his back.

I have to admit that he had a point about the tubes on the Alien's back, but I still felt the Alien was the superior breakdancer. My response:
Point taken, but I still see the Alien having much more agility, thus giving them the ability to move more gracefully than the Predator while break dancing. While their backspins might be problematic, they do have that giant head which would enable them to perform some rather spectacular headstands and headspins. And while the tail could get in the way of a back-spin, it would also give them a valuable 5th pivot with which they could pull off all sorts of crazy moves. Back spins are an important part of break dancing, but there is no shortage of upper body, frontal, side, or sliding moves, and indeed, there seem to be more of those than back maneuvers. When you add in the Alien's unique physiology, you get something that would allow for all sorts of variations and indeed, even totally new moves. Really, I think the Alien would revolutionize the break dancing scene. The Predator's upper-body strength would allow for some amazing handstand style moves, but in almost every other way they are less limber and agile than the alien or even most human break-dance experts. Indeed, the alien does not seem to have an absence of upper body strength, so it's not like that gives the Predator a decisive advantage (the way the alien's tail does). I suppose it's possible that not all Predators are as bulked up as the ones in the films, but there is no real evidence of that.

Personally, I still believe I'm right on that one. The next event that came into question was Competitive Hot-Dog Eating. My initial pick was Predator, mostly because of his larger mouth and mandibles (when you look closely, the Alien's mouth is actually quite small). Anyway, Roy had some comments about this pick as well:
Totally goes to alien. Aliens are always hungry. They do nothing but eat and kill. We don't even actually know that Predators eat meat. They're probably a bunch of annoying vegans. ;P

Once again, I think Roy makes a fair point here, but it's ultimately unpersuasive. My response:
This makes more sense to me, though I do maintain that the Alien's multi-tiered mouth is still significantly smaller and thus represents a bottleneck during any sort of competitive eating contest. Yes, their activities are generally limited to eating, killing, building those crazy hives and reproducing, but I see that as just a further example of why they would not be good at competitive eating. Since that's all they do, they do not have to eat fast. It's hard to tell because the alien and its motivations are so... alien... and unexplored. The Predators, on the other hand, clearly have some sort of civilization with technological capabilities well beyond our own. It stands to reason that they would have less time dedicated to eating, and thus would need to scarf down more in less time... which means they would be better suited towards competitive eating. Your point about vegan Predators is also taken, but what we know of their culture is that it is based primarily on hunting. While I'm sure there are vegan Predators, I think it's fair to speculate that a race of hunters values and prizes meat.

I thought that was pretty good, but someone else stepped in at this point to defend Roy, noting that:
We know they hunt, yes, but in the hunts we've seen they take trophies, not food. I have yet to see a predator field-dress an alien. I mean, hell, how much meat could be on something like that anyway? It's all chitin and sinew, not really a meal at all, and that's before we think about the effects upon the stomach lining of that acid blood (ulcers like you wouldn't believe!!). No, it's not fair to speculate on their eating habits by looking at their hunts. Their hunts are trophy kills, rites of passage, not a means for survival. Everything we've seen of their society, we haven't been given clue one about their eating habits.

This is certainly an interesting take on the matter. My response:
Interesting point, but I think it's reasonable to make some extrapolations based on their hunting culture. It's fair to assume that their hunts as portrayed in the movies are indeed trophy hunts and not a matter of survival or food. This makes sense on an additional level because they're hunting alien species, and alien physiology may not react well with their digestive systems (as you mention, the Alien would be particularly bad in that respect). However, it's also reasonable to assume that this hunting tradition arose because hunting was required in the evolution of their species. Yes, I'm extrapolating from human experience here, but there are humans today who hunt purely for trophies. The likely reason the Predator race is so focused on hunting is that they were forced to hunt on their home planet. Indeed, in such a case, the act of hunting could take on a more meaningful aspect for symbolic or perhaps even spiritual reasons. The act of hunting clearly goes beyond survival for them, but it probably began as a simple survival technique on their home planet and grew into a more meaningful practice as the race became more advanced.

This thread went on for a few more posts and ultimately resulted in a stalemate, as we really don't know enough about either culture to say for sure. I still think it's reasonable to say that the hunting culture of the Predators implies a history of hunting and meat-eating.
The next topic under debate was the Wet T-Shirt Contest, which I had originally given a tie. After all, for the most part, we see both the Alien and the Predator without their shirts on, so what's the point of a Wet T-Shirt Contest? However, someone interjected a brilliant point that totally convinced me that I was wrong; the Alien would undoubtedly win this event.
Wet T-shirt: Alien. Preddy has been noted on several occasions to be "one ugly motherfucker."

There is simply no arguing with that one.
Posted by Mark on November 26, 2008 at 11:32 PM .: link :.
Wednesday, September 24, 2008
A few years ago, The Onion put out a book called Our Dumb Century. It comprised a series of newspaper front pages, one from each year. It was an interesting book, in part because of the events they chose to represent each year and also because The Onion writers are hilarious. The most brilliant entry in the book was from the 1969 edition of the paper:
Utterly brilliant. You can't read it on that small copy, but there's a whole profanity-laden exchange between Houston and Tranquility Base that's also hysterically funny. As it turns out, The Onion folks went ahead and made a video, complete with archival footage and authentic-sounding voices, beeps, static, etc... Incredibly funny. [video via Need Coffee]
Update: Weird, I tried to embed the video in this post, but when you click play it says it's no longer available... but if you go directly to youtube, you can get the video. I'm taking out the embedded video and putting in the link for now.
Posted by Mark on September 24, 2008 at 10:04 PM .: link :.
Wednesday, August 06, 2008
Keeper Leagues and Unexpected Consequences
It's not a secret that I'm a pretty geeky guy, especially when it comes to certain subjects (movies, SF, etc...). My friends are a different kind of geek though. They're sports geeks. Specifically, they love baseball. About 10 years ago, they started a fantasy baseball league. At the time, the various websites weren't that great, but as the years passed, things started to get more sophisticated... and the league became much more competitive. In true geek fashion, we started getting carried away with various aspects of the league. Every team owner is expected to issue faux press releases (i.e., pretending to be the Associated Press), faux interviews, and the like, and the league wrote a Constitution. In its current incarnation, the Constitution is 11 pages long. Every year, owners propose amendments in accordance with Article VI of the Constitution, and if 2/3 of the league approves of the amendment, it is ratified and put in the Constitution.
A few years ago, we ratified an amendment that gave each owner "keeper rights." What this basically means is that you can keep three eligible members of your team for the next season. Here's an excerpt from Article IV of the MLF Constitution:
Article IV: Keeper Rights

The rules of keeper eligibility help keep things a little even, meaning that a team that wins the league one year won't carry an outsized advantage into the next. You can't keep a player indefinitely, and since players drafted in the first three rounds are also ineligible, the best players are still open to even the worst team in the following year's draft. And Article IV, section 3 features an interesting twist: "Trading keeper rights is permitted."
Now, these rules were put into place for many reasons. Some people like the opportunity to take a chance on a young, developing player (in the hopes that they'll be able to keep them for a breakout year in the following season). Some people want to make sure the team has a solid core that can be built upon. And a host of other reasons. However, after three years of keeper rights, some unexpected consequences have presented themselves.
The biggest implication is that team owners who are not doing well will "sell" their keeper ineligible players for more keeper rights and keeper eligible players. Similarly, those who are doing well will "sell" their keeper rights in the hopes of strengthening their team for the playoffs. The reason I'm using scare quotes around the word "sell" is that what this really amounts to are fire sales. Top tier players will often be traded for near scraps because a team that has no hope of winning the league has no use for that top tier player, but they could use a keeper right to help build for the future.
Initially, there was a bit of a learning curve. How much value does a keeper right really have? In the first season, someone traded 3 keeper rights for Albert Pujols, a trade so lopsided that a new constitutional amendment was ratified (titled the Golden Shaft award, it is given to the owner who made the worst trade of the season). However, after a few years, things have changed. Keeper rights have become more valuable, and teams in contention will "mortgage their future" by trading keeper rights for players (this effectively means they can add top tier talent without losing anything that impacts them for the current season). Some people value keeper rights much more than others, and during this season's trade deadline, things got ridiculous.
During the last day before the trade deadline, there were 8 trades involving 36 players and 7 keeps. This is rather obscene. One owner traded his 3 keeps for 8 players (many top tier folks) and made another trade for 5 additional players. In effect, this person replaced most of his team in one day and became an instant league powerhouse (and he is my division rival as well!) Needless to say, this year's "Winter Meetings" will contain much discussion regarding how we can mitigate these fire sales. There are several options available to us:
Posted by Mark on August 06, 2008 at 09:09 PM .: link :.
Wednesday, July 30, 2008
Predictions and Information Overload
I'm currently reading Arthur C. Clarke's novel, Childhood's End, and I found this passage funny:
...there are too many distractions and entertainments. Do you realize that every day something like five hundred hours of radio and TV pour out over the various channels? If you went without sleep and did nothing else, you could follow less than a twentieth of the entertainment that’s available at the turn of a switch! No wonder people are becoming passive sponges — absorbing but never creating. Did you know that the average viewing time per person is now three hours a day? Soon people won’t be living their own lives any more. It will be a full-time job keeping up with the various family serials on TV!

I don't think Clarke was really attempting to make a firm prediction in this statement (which is essentially made in passing), but it's amusing to think how much he got right and how much he got wrong. Considering that he was writing this book in the early 1950s, he actually did make a pretty decent prediction when it came to average viewing time per person. In the US, the number is more like 4-5 hours a day (I'm betting that this will be in decline, especially in this year of the WGA strike), but worldwide, it's probably down around 3 hours a day. On the other hand, Clarke drastically underestimated the amount of content made available and also the effect of so much content.
The United States alone has 2,218 stations, which is over 4 times as many stations as Clarke had predicted hours. If we assume each station only broadcasts for an average of 16 hours a day, that works out to over 35,000 hours of programming (70 times as much as Clarke had predicted for both TV and radio). And this doesn't even count things like On Demand, DVDs, and newer entertainment mediums like the Internet (which includes stuff like YouTube and podcasts, etc... in addition to the standard textual data) and video games.
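For the curious, the arithmetic here is easy to verify; a quick back-of-the-envelope sketch using the figures cited above (the 16-hour broadcast day is, as noted, just an assumption):

```python
# Back-of-the-envelope check of the TV programming estimate above.
clarke_hours = 500       # Clarke's predicted daily hours of radio + TV combined
us_stations = 2218       # US TV station count cited above
hours_per_station = 16   # assumed average broadcast day per station

total_hours = us_stations * hours_per_station
print(total_hours)                    # 35488 hours of programming per day
print(us_stations / clarke_hours)     # ~4.4x as many stations as predicted hours
print(total_hours / clarke_hours)     # ~71x Clarke's total estimate
```

So "over 35,000 hours" and "70 times as much" both check out, even before counting the Internet.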
Which brings me to the other interesting thing about Clarke's prediction. He seemed to think that when that much entertainment became readily available, we would become "passive sponges — absorbing but never creating." But in today's world, the opposite seems true. Indeed, content creation seems to be accelerating. To be sure, Clarke was right in the general sense that massive amounts of data do indeed come with problems of their own. He's certainly right to note that you can only really experience a tiny fraction of what's out there at any given time, and this can be an issue. Ironically, a Google search for "Information Overload" yields 2,150,000 results, which is as good an example as any. On a personal level, I don't think this goes as far as, say, Nicholas Carr seems to think, and as long as we find ways around the mammoth amounts of data we're all expected to assimilate on a daily basis (stuff like self-censorship seems to help), we should be fine.
Posted by Mark on July 30, 2008 at 07:06 PM .: link :.
Wednesday, April 02, 2008
Via Haibane.info, I stumbled across this:
It's pretty funny and I got a little curious about the history of this thing. Apparently a sketch comedy troupe in Wisconsin called the Dead Alewives put together an album featuring a parody of Dungeons & Dragons. The audio skit is pretty funny by itself, and it's been making the rounds on radio and the internet ever since the mid 1990s. In 2000, a bunch of developers at a video game company, Volition (they made Descent, Red Faction, and of course, Summoner), made an animated version, and distributed it along with their games (it's in some promotional material and if you win the game, you see it there as well). So it went from a sketch comedy group, to a CD they made, to the radio, to the internet, got mashed up with visuals from other video games, and has now finally made its way to me (about 12 years later).
Posted by Mark on April 02, 2008 at 10:42 PM .: link :.
Sunday, March 23, 2008
I recently finished watching both seasons of Dexter. The series has a fascinating premise: the titular hero, Dexter Morgan, is a forensic analyst (he's a "blood spatter expert") for the Miami police by day, but a serial killer by night. He operates by a "code," only murdering other murderers (usually ones who've beaten the system). The most interesting thing about Dexter's code is the implication that he does not follow the code out of some sort of dedication to morality or justice. He knows what he does is evil, but he follows his code because it's the most constructive way to channel his aggression. Of course, the code is not perfect, and a big part of the series is how the code shapes him and how he, in turn, shapes it. To be honest, watching the series is a little odd and disturbing when you realize that you're essentially rooting for a serial killer (an affable and charming one, to be sure, but that's part of why it's disturbing). I started to think about this a bit, and several other examples of similar characters came to mind. There's a lot more to the series, but I don't want to ruin it with a spoiler-laden discussion here. Instead, I want to talk about vigilantes.
Despite the lack of concern for justice (or perhaps because of that), Dexter is essentially a vigilante... someone who takes the law into his own hands. There is, of course, a long history of vigilantism, in both real life and art. Indeed, many classic instances happened long before the word vigilante was coined - for example, Robin Hood. He stole from the rich to give to the poor, and was immortalized as a folk hero whose tales are still told to this day. I think there is a certain cultural fascination with vigilantes, especially vigilantes in art.
Take superheroes, most of whom are technically vigilantes. Sure, many stand for all that is good in the world and often cite truth and justice as motivation, but the evolution of comic books shows something interesting. I haven't read a whole lot of comic books (especially of the superhero kind), but the impression I get is that when the craze started in the 1930s, it was all about heroics and people serving the common good. There was also a darker edge to some of them, and that edge has grown as time progressed. Batman is probably the most relevant to this discussion, as he shares a complicated relationship with the police and a certain above-the-law attitude towards solving crimes. Interestingly, the Batman of the 1930s was probably a darker, more violent superhero than he was in the 1940s, when one editor issued a decree that the character could no longer kill or use a gun. As such, the postwar Batman became more of an upstanding citizen, and the stories took on a lighter tone (definitely an understandable direction, considering what the world had been through). I'm sure I'm butchering the Batman chronology here, but the next significant touchstone for Batman came in 1986, with the publication of Batman: The Dark Knight Returns. Written and drawn by Frank Miller, the series reintroduced Batman as a dark, brooding character with complex psychological issues. A huge success, this series ushered in a new era of "grim and gritty" superheroes that still holds today.
In general, our superheroes have become much more conflicted. Many (like Batman) tackle the vigilante aspect head on, and if you look at something like Watchmen (or The Incredibles, if you want a lighter version), you can see a shift in the way such stories are told. I'm sure there are literally hundreds of other examples in the comic book world, but I want to shift gears for a moment and examine another cultural icon that Dexter reminded me of: Dirty Harry.
Inspector Harry Callahan is an incredibly popular character, but apparently not with critics:
Critics have rarely cracked the whip harder than on the Dirty Harry film series, which follows the exploits of a trigger-happy San Francisco cop named Harry Callahan and his junior partners, usually not long for this world. On its release in 1971, Dirty Harry was trounced as 'fascist medievalism' by the potentate of the haut monde critic set, Pauline Kael, as well as aspiring Kaels like young Roger Ebert. Especially irksome to the criterati was a key moment in the film when Inspector Callahan, on the trail of an elusive serial sniper, is reprimanded by his superiors for not taking into account the suspect's Miranda rights. Callahan replies, through clenched teeth, "Well, I'm all broken up about that man's rights." Take that, Miranda.

I should say that critics often give the film (at least, the first one) generally good overall marks, praising its "suspense craftsmanship" or calling it "a very good example of the cops-and-killers genre." But I'm fascinated by all the talk of fascism. Despite working within the system, Dirty Harry indeed does take the law into his own hands, and in doing so he ignores many of our treasured Constitutional freedoms. And yet we all cheer him on, just as we cheer Batman and Dexter.
Why are these characters so popular? Why do we cheer such characters on even when we know what they're doing is ultimately wrong? I think it comes down to desire. We all desire justice. We want to see wrongs being made right, yet every day we can turn on the TV and watch non-stop failures of our system, whether it be rampant crime, a criminal going free, or any number of other indignities. Now, I'm not an expert, but I don't think our society today is much worse off than it was, say, a hundred years ago (In fact, I think we're significantly better off, but that's another discussion). The big difference is that information is disseminated more widely and quickly, and dramatic failures of the system are attention grabbing, so that's what we get. What's more, these stories tend to focus on the most dramatic, most obscene examples. It's natural for people to feel helpless in the face of such news, and I think that's why everyone tends to embrace vigilante stories (note that people don't generally embrace actual real-life vigilantes - that's important, and we'll get to that later). Such stories serve many purposes. They allow us to cope with life's tragedies, internalize them and in some way comfort us, but as a deeper message, they also emphasize that the world is not perfect, and that we'll probably never solve the problem of crime. In some ways, they act as a critique of our system, pointing out its imperfections and thereby making sure we don't become complacent in the ever-changing fight against crime.
Of course, there is a danger to this way of thinking, which is why critics like Pauline Kael get all huffy when they watch something like Dirty Harry. We don't want to live in a police state, and to be honest, a real cop who acted like Dirty Harry would probably be an awful cop. Films like that deal in extremes because they're trying to make a point, and it's easy to misinterpret such films. I doubt people would really accept a cop like Dirty Harry. Sure, some folks might applaud his handling of the Scorpio case that the film documents (audiences certainly did!), but police officers don't handle a single case in the course of their career, and most cases aren't that black and white either. Dirty Harry would probably be fired out here in the real world. Ultimately, while we revel in such entertainment, we don't actually want real life to imitate art in this case. However, that doesn't mean we enjoy hearing about a vicious drug dealer going free because the rules of evidence were not followed to the letter. I think deep down, people understand that concepts like the rules of evidence are important, but they can also be extremely frustrating. This is why we have conflicting emotions when we watch the last scene in Dirty Harry, in which he takes off his police badge and throws it into the river.
I think this is a large part of why vigilante stories have evolved. Comic book heroes like Batman have become more conflicted, and newer comic books often deal with the repercussions of vigilantism. The Dirty Harry sequel, Magnum Force, was apparently made as a direct answer to the critics of Dirty Harry who thought that film was openly advocating law-sanctioned vigilantism. In Magnum Force, the villains are vigilante cops. Then you have modern day vigilantes like Dexter, which pumps audiences full of conflicting emotions. I like this guy, but he's a serial killer. He's stopping other killers, but he's doing so in such a disturbing way.
Are vigilante stories fascist fantasies? Perhaps, but fantasies aren't real. They're used to illustrate something, and in the case of vigilante fantasies, they illustrate a desire for justice. The existence of a show like Dexter will repulse some people and that's certainly an understandable reaction. In fact, I think that's exactly what the show's creators want to do. They're walking the line between satisfying the desire for justice while continually noting that Dexter is not a good person. Ironically, what would repulse me more would be the complete absence of stories like Dexter, because the only way such a thing could happen would be if everyone thought our society was perfect. Perhaps someday concepts like justice and crime will be irrelevant, but that day ain't coming soon, and until it does, we'll need such stories, if only to remind us that we don't live in a perfect world.
Posted by Mark on March 23, 2008 at 07:16 PM .: link :.
Sunday, December 23, 2007
The Two Days of Christmas
I suppose I could have done a 12 days of Christmas post in the vein of the 4 weeks of Halloween posts, but there's obviously no time left. So here are a few things I've watched, read, or listened to recently in preparation for Christmas.
Posted by Mark on December 23, 2007 at 09:25 PM .: link :.
Wednesday, December 05, 2007
Every so often, I see someone who is genuinely concerned with reaching the unreachable. Whether it be scientists who argue about how to frame their arguments, alpha-geek programmers who try to figure out how to reach typical, average programmers, or critics who try to open a dialogue with feminists. Debates tend to polarize, and when it comes to politics or religion, assumptions of bad faith on both sides tend to derail discussions pretty quickly.
How do you reach the unreachable? Naturally, the topic is much larger than a single blog entry, but I did run across an interesting post by Jon Udell that outlines Charles Darwin's rhetorical strategy in the book, On the Origin of Species (which popularized the theory of evolution).
Darwin, says Slatkin, was like a salesman who finds lots of little ways to get you to say yes before you're asked to utter the big yes. In this case, Darwin invited people to affirm things they already knew, about a topic much more familiar in their era than in ours: domestic species. Did people observe variation in domestic species? Yes. And as Darwin piles on the examples, the reader says, yes, yes, OK, I get it, of course I see that some pigeons have longer tail feathers. Did people observe inheritance? Yes. And again, as he piles on the examples, the reader says yes, yes, OK, I get it, everyone knows that the offspring of longer-tail-feather pigeons have longer tail feathers.

I think Udell simplifies the inception and development of the idea of evolution, but I think the point generally holds. Darwin's ideas didn't come into mainstream prominence until he published his book, decades after he had begun his work. Obviously, Darwin's strategy isn't applicable in every situation, but it is an interesting place to start (I suppose we should keep in mind that evolution is still controversial amongst the mainstream)...
Posted by Mark on December 05, 2007 at 08:29 PM .: link :.
Wednesday, November 28, 2007
Facial Expressions and the Closed Eye Syndrome
I've been reading Malcolm Gladwell's book, Blink, and one of the chapters focuses on the psychology of facial expressions. Put simply, we wear our emotions on our face, and some enterprising psychologists took to mapping the distinct muscular movements that the human face can make. It's an interesting process, and it turns out that people who learn these facial expressions (of which there are many) are eerily good at recognizing what people are really thinking, even if they aren't trying to show it. It's almost like mind reading, and we all do it to some extent or another (mostly, we do it unconsciously). Body language and facial expressions are packed with information, and we'd all be pretty much lost without that kind of feedback (perhaps why misunderstandings are more common on the phone or in email). Most of the time, our expressions are voluntary, but sometimes they're not. Even if we're trying to suppress our expressions, a fleeting look may cross our faces. Often, these "micro-expressions" last only a few milliseconds and are imperceptible, but when trained psychologists watch video of, say, Harold "Kim" Philby (a notorious Soviet spy) giving a press conference, they're able to read him like a book (slow motion helps).
I found this example interesting, and it highlights some of the subtle differences that can exist between expressions (in this case, between a voluntary and involuntary expression):
If I were to ask you to smile, you would flex your zygomatic major. By contrast, if you were to smile spontaneously, in the presence of genuine emotion, you would not only flex your zygomatic but also tighten the orbicularis oculi, pars orbitalis, which is the muscle that encircles the eye. It is almost impossible to tighten the orbicularis oculi, pars orbitalis on demand, and it is equally difficult to stop it from tightening when we smile at something genuinely pleasurable.

I found that interesting in light of the Closed Eye Syndrome I noticed in Anime. I wonder how that affects the way we perceive Anime. If a smiling mouth by itself means a fake expression of happiness while a smiling mouth and closed eyes means genuine emotion, does that make the animation more authentic? Animation obviously doesn't have the fidelity of video or film, but we can obviously read expressions from animated faces, so I would expect that closed eye syndrome exists more because of accuracy than anything else. In my original post on the subject, Roy noted that the reason I noticed closed eyes in anime could have something to do with the way Japan and the US read emotion. He pointed to an article that claimed Americans focus more on the mouth while the Japanese focus more on the eyes when trying to read emotions from facial expressions. One example from the article was emoticons. For happiness, Americans use a smiley face :) while the Japanese tend to use ^_^ (which seems to be a face with eyes closed). That might still be part of it, but ever since I made the observation, I've noticed similar expressions in American animation (I just recently noticed it a lot in a Venture Bros. episode). Still, occurrences in American animation seem less frequent (or perhaps less obvious), so perhaps the observation still holds.
Gladwell's book is interesting, as expected, though I'm not sure yet if he has a point other than to observe that we do a lot of subconscious analysis and make lots of split decisions, and sometimes this is good (other times it's not). Still, he's good at finding examples and drilling down into the issue, and even if I'm not sure about his conclusions, it's always fun to read. There's lots more on this subject in the book (for instance, he goes over how facial expressions and our emotions are a two-way phenomenon - meaning that if you intentionally contort your face in a specific way, you can induce certain emotions. The psychologists I mentioned earlier who were mapping expressions noticed that after a full day of trying to manipulate their facial muscles to show anger (even though they weren't angry) they felt horrible. Some tests have been done to confirm that, indeed, our facial expressions are linked directly to our brain) and it's probably worth a read if that's your bag.
Posted by Mark on November 28, 2007 at 08:19 PM .: link :.
Sunday, November 18, 2007
The Paradise of Choice?
A while ago, I wrote a post about the Paradox of Choice based on a talk by Barry Schwartz, the author of a book by the same name. The basic argument Schwartz makes is that choice is a double-edged sword. Choice is a good thing, but too much choice can have negative consequences, usually in the form of some kind of paralysis (where there are so many choices that you simply avoid the decision) and consumer remorse (elevated expectations, anticipated regret, etc...). The observations made by Schwartz struck me as being quite astute, and I've been keenly aware of situations where I find myself confronted with a paradox of choice ever since. Indeed, just knowing and recognizing these situations seems to help deal with the negative aspects of having too many choices available.
This past summer, I read Chris Anderson's book, The Long Tail, and I was pleasantly surprised to see a chapter in his book titled "The Paradise of Choice." In that chapter, Anderson explicitly addresses Schwartz's book. However, while I liked Anderson's book and generally agreed with his basic points, I think his dismissal of the Paradox of Choice is off target. Part of the problem, I think, is that Anderson is much more concerned with the choices rather than the consequences of those choices (which is what Schwartz focuses on). It's a little difficult to tell though, as Anderson only dedicates 7 pages or so to the topic. As such, his arguments don't really eviscerate Schwartz's work. There are some good points though, so let's take a closer look.
Anderson starts with a summary of Schwartz's main concepts, and points to some of Schwartz's conclusions (from page 171 in my edition):
As the number of choices keeps growing, negative aspects of having a multitude of options begin to appear. As the number of choices grows further, the negatives escalate until we become overloaded. At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.

Now, the way Anderson presents this is a bit out of context, but we'll get to that in a moment. Anderson continues and then responds to some of these points (again, page 171):
As an antidote to this poison of our modern age, Schwartz recommends that consumers "satisfice," in the jargon of social science, not "maximize". In other words, they'd be happier if they just settled for what was in front of them rather than obsessing over whether something else might be even better. ...

Anderson has completely missed the point here. Later in the chapter, he spends a lot of time establishing that people do, in fact, like choice. And he's right. My problem is twofold: First, Schwartz never denies that choice is a good thing, and second, he never advocates removing choice in the first place. Yes, people love choice, the more the better. However, Schwartz found that even though people preferred more options, they weren't necessarily happier because of it. That's why it's called the paradox of choice - people obviously prefer something that ends up having negative consequences. Schwartz's book isn't some sort of crusade against choice. Indeed, it's more of a guide for how to cope with being given too many choices. Take "satisficing." As Tom Slee notes in a critique of this chapter, Anderson misstates Schwartz's definition of the term. He makes it seem like satisficing is settling for something you might not want, but Schwartz's definition is much different:
To satisfice is to settle for something that is good enough and not worry about the possibility that there might be something better. A satisficer has criteria and standards. She searches until she finds an item that meets those standards, and at that point, she stops.

Settling for something that is good enough to meet your needs is quite different than just settling for what's in front of you. Again, I'm not sure Anderson is really arguing against Schwartz. Indeed, Anderson even acknowledges part of the problem, though he again misstates Schwartz's arguments:
Vast choice is not always an unalloyed good, of course. It too often forces us to ask, "Well, what do I want?" and introspection doesn't come naturally to all. But the solution is not to limit choice, but to order it so it isn't oppressive.

Personally, I don't think the problem is that introspection doesn't come naturally to some people (though that could be part of it), it's more that some people just don't give a crap about certain things and don't want to spend time figuring it out. In Schwartz's talk, he gave an example about going to the Gap to buy a pair of jeans. Of course, the Gap offers a wide variety of jeans (as of right now: Standard Fit, Loose Fit, Boot Fit, Easy Fit, Morrison Slim Fit, Low Rise Fit, Toland Fit, Hayes Fit, Relaxed Fit, Baggy Fit, Carpenter Fit). The clerk asked him what he wanted, and he said "I just want a pair of jeans!"
The second part of Anderson's statement is interesting though. Aside from again misstating Schwartz's argument (he does not advocate limiting choice!), the observation that the way a choice is presented is important is interesting. Yes, the Gap has a wide variety of jean styles, but look at their website again. At the top of the page is a little guide to what each of the styles means. For the most part, it's helpful, and I think that's what Anderson is getting at. Too much choice can be oppressive, but if you have the right guide, you can get the best of both worlds. The only problem is that finding the right guide is not as easy as it sounds. The jean style guide at Gap is neat and helpful, but you do have to click through a bunch of stuff and read it. This is easier than going to a store and trying all the varieties on, but it's still a pain for someone who just wants a pair of jeans dammit.
Anderson spends some time fleshing out these guides to making choices, noting the differences between offline and online retailers:
In a bricks-and-mortar store, products sit on the shelf where they have been placed. If a consumer doesn't know what he or she wants, the only guide is whatever marketing material may be printed on the package, and the rough assumption that the product offered in the greatest volume is probably the most popular.I think it's a very good point he's making, though I think he's a bit too optimistic about how effective these guides to buying really are. For one thing, there are times when a choice isn't clear, even if you do have a guide. Also, while I think retailers that offer recommendations based on other customers' purchases are important and helpful, who among us hasn't seen absurd recommendations? From my personal experience, a lot of people don't like the connotations of recommendations either (how do they know so much about me? etc...). Personally, I really like recommendations, but I'm a geek and I like to figure out why they're offering me what they are (Amazon actually tells you why something is recommended, which is really neat). In any case, from my own personal anecdotal observations, no one puts much faith in probabilistic systems like recommendations or ratings (for a number of reasons, such as cheating or distrust). There's nothing wrong with that, and that's part of why such systems are effective. Ironically, acknowledging their imperfections allows users to make better use of the systems. Anderson knows this, but I think he's still a bit too optimistic about our tools for traversing the long tail. Personally, I think they need a lot of work.
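As a toy illustration of the "customers who bought this also bought..." style of recommendation discussed above, here's a minimal co-occurrence sketch in Python. The purchase data and item names are invented for the example, and real systems (Amazon's included) are vastly more sophisticated - this just shows the basic idea of mining other customers' baskets:

```python
from collections import Counter

# Hypothetical purchase histories - illustrative data, not any real retailer's.
purchases = {
    "alice": {"jeans", "belt", "boots"},
    "bob":   {"jeans", "boots", "hat"},
    "carol": {"jeans", "belt"},
    "dave":  {"hat", "scarf"},
}

def also_bought(item, purchases):
    """Rank items most often bought alongside `item` (simple co-occurrence)."""
    counts = Counter()
    for basket in purchases.values():
        if item in basket:
            counts.update(basket - {item})
    return [i for i, _ in counts.most_common()]

print(also_bought("jeans", purchases))  # belt and boots rank ahead of hat
```

Even a sketch this crude shows why such systems produce the occasional absurd recommendation: co-occurrence knows nothing about *why* two items ended up in the same basket.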
When I was younger, one of the big problems in computing was storage. Computers are the perfect data gathering tool, but you need somewhere to store all that data. In the 1980s and early 1990s, computers and networks were significantly limited by hardware, particularly storage. By the late 1990s, Moore's law had eroded this deficiency significantly, and today, the problem of storage is largely solved. You can buy a terabyte of storage for just a couple hundred dollars. However, as I'm fond of saying, we don't so much solve problems as trade one set of problems for another. Now that we have the ability to store all this information, how do we get at it in a meaningful way? When hardware was limited, analysis was easy enough. Now, though, you have so much data available that the simple analyses of the past don't cut it anymore. We're capturing all this new information, but are we really using it to its full potential?
I recently caught up with Malcolm Gladwell's article on the Enron collapse. The really crazy thing about Enron was that they didn't really hide what they were doing. They fully acknowledged and disclosed what they were doing... there was just so much complexity to their operations that no one really recognized the issues. They were "caught" because someone had the persistence to dig through all the public documentation that Enron had provided. Gladwell goes into a lot of detail, but here are a few excerpts:
Enron's downfall has been documented so extensively that it is easy to overlook how peculiar it was. Compare Enron, for instance, with Watergate, the prototypical scandal of the nineteen-seventies. To expose the White House coverup, Bob Woodward and Carl Bernstein used a source, Deep Throat, who had access to many secrets, and whose identity had to be concealed. He warned Woodward and Bernstein that their phones might be tapped. When Woodward wanted to meet with Deep Throat, he would move a flower pot with a red flag in it to the back of his apartment balcony. That evening, he would leave by the back stairs, take multiple taxis to make sure he wasn't being followed, and meet his source in an underground parking garage at 2 A.M. ...Again, there's a lot more detail in Gladwell's article. Just how complicated was the public documentation that Enron had released? Gladwell gives some examples, including this one:
Enron's S.P.E.s were, by any measure, evidence of extraordinary recklessness and incompetence. But you can't blame Enron for covering up the existence of its side deals. It didn't; it disclosed them. The argument against the company, then, is more accurately that it didn't tell its investors enough about its S.P.E.s. But what is enough? Enron had some three thousand S.P.E.s, and the paperwork for each one probably ran in excess of a thousand pages. It scarcely would have helped investors if Enron had made all three million pages public. What about an edited version of each deal? Steven Schwarcz, a professor at Duke Law School, recently examined a random sample of twenty S.P.E. disclosure statements from various corporations (that is, summaries of the deals put together for interested parties) and found that on average they ran to forty single-spaced pages. So a summary of Enron's S.P.E.s would have come to a hundred and twenty thousand single-spaced pages. What about a summary of all those summaries? That's what the bankruptcy examiner in the Enron case put together, and it took up a thousand pages. Well, then, what about a summary of the summary of the summaries? That's what the Powers Committee put together. The committee looked only at the "substance of the most significant transactions," and its accounting still ran to two hundred numbingly complicated pages and, as Schwarcz points out, that was "with the benefit of hindsight and with the assistance of some of the finest legal talent in the nation."Again, Gladwell's article has a lot of other details and is a fascinating read. What interested me the most, though, was the problem created by so much data. That much information is useless if you can't sift through it quickly or effectively enough. Bringing this back to the paradise of choice, the current systems we have for making such decisions are better than ever, but still require a lot of improvement.
Anderson is mostly talking about simple consumer products, so none are really as complicated as the Enron case, but even then, there are still a lot of problems. If we're really going to overcome the paradox of choice, we need better information analysis tools to help guide us. That said, Anderson's general point still holds:
More choice really is better. But now we know that variety alone is not enough; we also need information about that variety and what other consumers before us have done with the same choices. ... The paradox of choice turned out to be more about the poverty of help in making that choice than a rejection of plenty. Order it wrong and choice is oppressive; order it right and it's liberating.Personally, while the help in making choices has improved, there's still a long way to go before we can really tackle the paradox of choice (though, again, just knowing about the paradox of choice seems to do wonders in coping with it).
As a side note, I wonder if the video game playing generations are better at dealing with too much choice - video games are all about decisions, so I wonder if folks who grew up working on their decision making apparatus are more comfortable with being deluged by choice.
Posted by Mark on November 18, 2007 at 09:47 PM .: link :.
Wednesday, October 17, 2007
The Spinning Silhouette
This Spinning Silhouette optical illusion is making the rounds on the internet this week, and it's being touted as a "right brain vs left brain test." The theory goes that if you see the silhouette spinning clockwise, you're right brained, and you're left brained if you see it spinning counterclockwise.
Every time I looked at the damn thing, it was spinning a different direction. I closed my eyes and opened them again, and it spun a different direction. Every now and again it would stay the same direction twice in a row, but if I looked away and looked back, it changed direction. Now, if I focus my eyes on a point below the illusion, it doesn't seem to rotate all the way around at all; instead it seems like she's moving from one side to the other, then back (i.e. changing directions every time the one leg reaches the side of the screen - and the leg always seems to be in front of the silhouette).
Of course, this is the essence of the illusion. The silhouette isn't actually spinning at all, because it's two dimensional. However, since my brain is used to living in a three dimensional world (and thus parsing three dimensional images), it's assuming that the image is also three dimensional. We're actually making lots of assumptions about the image, and that's why we can see it going one way or the other.
Eventually, after looking at the image for a while and pondering the issues, I got curious. I downloaded the animated gif and opened it up in the GIMP to see how the frames are built. I could be wrong, but I'm pretty sure this thing is either broken or it's cheating. Well, I shouldn't say that. I noticed something off on one of the frames, and I'd be real curious to know how that affects people's perception of the illusion (to me, it means the image is definitely moving counterclockwise). I'm almost positive that it's too subtle to really affect anything, but I did find it interesting. More on this, including images and commentary, below the fold. First things first, here's the actual spinning silhouette.
Again, some of you will see it spinning in one direction, some in the other direction. Everyone seems to have a different trick for getting it to switch direction. Some say to focus on the shadow, some say to look at the ankles. Closing my eyes and reopening seems to do the trick for me. Now let's take a closer look at one of the frames. Here's frame 12:
Looking at this frame, you should be able to switch back and forth, seeing the leg behind the person or in front of the person. Again, because it's a silhouette and a two dimensional image, our brain usually makes an assumption of depth, putting the leg in front or behind the body. Switching back and forth on this static image was actually a lot easier for me. Now the tricky part comes in the next frame, number 13 (obviously, the arrow was added by me):
Now, if you look closely at the leg, you'll see a little imperfection in the silhouette. Maybe I'm wrong, but that little gash in the leg seems to imply that the leg is behind the body. If you try, you can still get yourself to see the image as having the leg in front, but then you've got this gash in the leg that just seems very out of place.
So what to make of this? First, the imperfection is subtle enough (it's on 1 frame out of 34) that everyone still seems to be able to see it rotate in both directions. Second, maybe I'm crazy, and the little gash doesn't imply what I think. Anyone have alternative explanations? Third, is that imperfection intentional? If so, why? It does not seem necessary, so I'd be curious to know if the creators knew about it, and what their intention was regarding it.
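For anyone who wants to repeat this frame-by-frame inspection without firing up the GIMP, here's a sketch using the Pillow imaging library (a third-party package, and an assumption on my part - the GIMP works just as well). To keep the example self-contained it generates a tiny 3-frame GIF in memory; for the illusion you'd open the downloaded file instead and save each frame out for a closer look:

```python
from io import BytesIO
from PIL import Image, ImageSequence

# Build a tiny 3-frame animated GIF in memory so the example is
# self-contained (for the real illusion, open the downloaded file).
frames = [Image.new("P", (8, 8), color=i) for i in range(3)]
buf = BytesIO()
frames[0].save(buf, format="GIF", save_all=True,
               append_images=frames[1:], duration=40, loop=0)
buf.seek(0)

gif = Image.open(buf)
for i, frame in enumerate(ImageSequence.Iterator(gif), start=1):
    # frame.info["duration"] is the per-frame delay in milliseconds
    print(f"frame {i}: size={frame.size} delay={frame.info.get('duration')}ms")
print(f"total frames: {gif.n_frames}")
```

With the real file, replacing the `print` with `frame.convert("RGB").save(f"frame{i}.png")` would dump every frame to disk, making it easy to find the one with the gash.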
Finally, as far as the left brain versus right brain portion, I find that I don't really care, but I am interested in how the imperfection would affect this "test." This neuroscientist seems to be pretty adamant about the whole left/right thing being hogwash though:
...the notion that someone is "left-brained" or "right-brained" is absolute nonsense. All complex behaviours and cognitive functions require the integrated actions of multiple brain regions in both hemispheres of the brain. All types of information are probably processed in both the left and right hemispheres (perhaps in different ways, so that the processing carried out on one side of the brain complements, rather than substitutes, that being carried out on the other).At the very least, the traditional left/right brain theory is a wildly oversimplified version of what's really happening. The post also goes into the way the brain "fills in the gaps" for confusing visual information, thus allowing the illusion.
Update: Strange - the image appears to be rotating MUCH faster in Firefox than in Opera or IE. I wonder how that affects perception.
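One plausible explanation (an assumption on my part, not something I've verified for these specific browsers) is that browsers enforce different minimum frame delays: if a GIF declares a very short per-frame delay, some browsers round it up more aggressively than others, so the same file animates at different speeds. You can check what delay a file actually declares with a small standard-library scan of its Graphic Control Extension blocks. This is a heuristic scan, not a full GIF parser, and the byte string below is a contrived stand-in for a real file:

```python
import struct

def gif_frame_delays(data: bytes):
    """Scan raw GIF bytes for Graphic Control Extension blocks and
    return each frame's declared delay in milliseconds.  Heuristic:
    the 3-byte pattern could in principle occur inside pixel data,
    so this is for poking around, not production parsing."""
    delays, i = [], 0
    while (i := data.find(b"\x21\xf9\x04", i)) != -1:
        # The two bytes after the packed-fields byte hold the delay,
        # little-endian, in hundredths of a second.
        (centiseconds,) = struct.unpack_from("<H", data, i + 4)
        delays.append(centiseconds * 10)
        i += 6
    return delays

# Contrived bytes: two GCE blocks declaring 100 ms and 20 ms delays.
fake_gif = (b"GIF89a"
            + b"\x21\xf9\x04\x00\x0a\x00\x00\x00"
            + b"\x21\xf9\x04\x00\x02\x00\x00\x00")
print(gif_frame_delays(fake_gif))  # [100, 20]
```

If the real file declared a tiny delay, a browser bumping it to its own minimum while another honored it would produce exactly the kind of speed difference described above.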
Posted by Mark on October 17, 2007 at 10:42 PM .: link :.
Wednesday, October 03, 2007
Groping and Probing
So a few recent installments of Shamus' new comic, Chainmail Bikini, have created a bit of controversy. The comics in question are actually a series of 3 (the fact that there are 3 is a key part of the controversy, but we'll get to that in a moment). Here they are: The controversy stems from the fact that there is a malicious groping in comic #6. Perhaps due to an ill-advised punchline ("improved stamina"), the discussion turned from one of groping and larping and into one of rape. And we all know how funny discussions of rape can get.
To be honest, I didn't find this particular arc in the comics very funny. However, I didn't find it very offensive either (though I can see why some might think so). Also, while I didn't find it especially funny, I do think it makes an interesting statement about gaming in general.
I don't tend to read web-comics the same way I read blogs. I tend to let several installments build up, and then read them all. So I didn't read this particular story arc until I knew about the controversy, and I must admit to a little bit of observer bias. Knowing there was a controversy colored my reading of the comic, and two things immediately struck me.
First is that while there is an element of one guy antagonizing his buddy, there is also an element of probing. By probing, I'm referring to exploration of the limits of a game and its possibilities. Steven Johnson's book Everything Bad is Good for You has a chapter on Video Games which covers this concept really well, and I recently wrote about it:
Probing is essentially exploration of the game and its possibilities. Much of this is simply the unconscious exploration of the controls and the interface, figuring out how the game works and how you're supposed to interact with it. However, probing also takes the more conscious form of figuring out the limitations of the game. For instance, in a racing game, it's usually interesting to see if you can turn your car around backwards, pick up a lot of speed, then crash head-on into a car going the "correct" way.Now again, in comic #6, one character is clearly attempting to antagonize his friend for choosing to role play a woman. However, I find it interesting that he chose to do so in such a way that is consistent with his character (who is a Chaotic Neutral barbarian) and followed the rules of the game (rolling dice, etc...). According to the notes that accompany this arc, this sort of thing tends to happen when a campaign is not going well. If the players aren't having fun, they're going to make fun, and if you're in a role playing game, they're going to do so by making their characters do something a little extreme. They don't do this because they are really extreme people, but because they want to see what happens. In short, they want to knock the game off its boring rails. In this case, one player's character groped another player's character. And from the aftermath in comics #7 and #8, you can see that things certainly got interesting. However, you also see that there were indeed consequences for the groping (one player physically assaults the other), and the comments that accompany each comic clearly attest that this is, in fact, a bad thing. To me, it's clear that the character in the comic is engaging in probing, but the comic also makes it clear that in a game that is as open-ended as D&D, it's possible to take things too far, which is why you saw a "real-world" reprisal (scare quotes due to the fact that this is a fictional comic, after all).
The second thing that struck me also had to do with the consequences. The situation immediately reminded me of this post from my friend Roy's feminist blog. He found this German poster which has a picture accompanied by this text:
Warning! Women defend themselves! If you leer at, catcall, or touch a woman, take into account that you might be loudly ridiculed, have a glass of beer poured over you, or be slapped in the face. Therefore, we strongly advise you to refrain from such harassment!This is exactly what happened in comics #6 - #8. Well, not exactly. The comics actually take the consequences even further, while further abstracting the situation. Let me elaborate. The poster that Roy is pointing to is talking about real life situations. If you grope some woman at a bar, expect to be slapped in the face (or worse). What happened in the comics? An imaginary character who was role playing his own imaginary character groped another imaginary character that was being role played by yet another imaginary character. No one actually exists in this scenario, and yet there are indeed consequences for the groping. In fact, the consequences were the entire point of this character arc. So when I read comics #6-#8, I immediately saw them as a demonstration of Roy's poster. (Ironically, you could even read into this more, saying that the consequences have actually broken free of the imaginary world of Chainmail Bikini and taken root in the real world - in the form of a long comment thread and multiple blog postings like this one).
Now, if one were so inclined, I can see why this arc would be grating. Personally, it doesn't bother me, but I've never been groped (er, against my will) and I can certainly understand how that could be off-putting (I suppose an argument could be made that there are some other gender issues as well). And as an astute commenter at Shamus' site points out, a lot of why this comic doesn't work as humor is due to the structure of the story:
A lot of why this doesn't work well as humour, and why it's ended up annoying people, is to do with the structure of the comic. I think Shamus really struggled with fitting a potentially amusing gag into the three-panel format, and ultimately didn't manage it successfully.Shamus himself has noted that this explanation is not only accurate, but a good explanation as to why people are offended by what he essentially saw as a harmless joke. This makes sense to me. He wrote a strip that touched on a controversial subject in a humorous way, but then he was forced to cut it up and insert artificial punchlines, one of which implied more than he thought. From his point of view, the comic is basically the same as before, but just split up a little. All of a sudden people start talking about rape and unsubscribing to the comic. I can see why he'd be a bit perplexed by even a reasonable objection to the comic.
I've never been a particularly great writer. When I was in high school, I always excelled at math and science, but never did especially well at English or writing. By college, I was much more comfortable with writing, and part of the reason for that was that I realized that writing isn't precise. Language is inherently vague and open to interpretation, and though there are some people who can wield language astoundingly well, most of us will open ourselves up to criticism simply by the act of expressing ourselves. One of my favorite quotes summarizes this well:
"To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!"Unfortunately, this simple miscommunication seems to have gotten lost in a thread of almost 200 comments. Some people have quit reading the comic altogether because of some perceived malice or ignorance on Shamus' part, others have taken to turning this into a divisive debate about rape. I don't want to start a holy war here, but when it comes to controversial stuff like this, I tend to give the creators the benefit of the doubt.
I think this whole controversy has brought up some interesting ideas, even if most have reduced it to a debate about rape. For instance, probing in games often takes the form of doing something extreme. My seemingly innocuous example above was turning your racecar around and driving the wrong direction to see what happens when you ram into another car. In real life, such an action would be catastrophic and could result in multiple deaths. Now, does doing something like that speak ill of me (the player)? How does wanton vehicular homicide compare to imaginary groping?
In my limited D&D gaming career, I played a Chaotic Evil thief who stole from his own party (i.e. one of my friends). Why did I do that? In real life, I'd never do such a thing. Why would I be interested in doing it in a role playing game? At a later point, I certainly suffered the consequences for my actions, and I think that's the rub. Playing games is all about setting up a paradigm, and sometimes half the fun is attempting to pull it down and find the holes in the paradigm, just to see what happens. I think that's a big part of why open-ended games like Grand Theft Auto are so popular. It's not the act of stealing a car or murdering a stranger that's fun, it's the act of attempting to derail the game. (Again, I touched on this in a post on game manuals.) In a recent discussion on what people like about Role Playing Games (also at Shamus' site), one of the most prominent answers was that good RPGs "...must give the player lots of freedom to make their own choices." One of the things I really hated about God of War (an otherwise awesome game) was that the character I was playing was a real prick. At one point, he goes out of his way to kill an innocent bystander (something about kicking him down into the hydra maybe? I don't remember specifically.) and that really annoyed me. What happened didn't bother me so much as the fact that I didn't have a choice in the matter. I don't really have an answer here, but I like games that give me a lot of freedom, because once I get bored by the forced or scripted aspects of the game, I can probe for weaknesses in the paradigm, and maybe even exploit them.
Update: I just noticed that Roy has tackled this subject on his blog. He seems quite disheartened by Shamus' post, though Roy wrote his post before the comment I quoted above was posted... My perception was that Shamus just couldn't understand why people were objecting... but once someone actually pointed out, in detail, why the humor doesn't work, he seemed to be more understanding (not only of why people were complaining, but of what people were suggesting by their complaints). But that's just me. I don't want to put words in Shamus' mouth, but as I already mentioned, I tend to give creators the benefit of the doubt.
Posted by Mark on October 03, 2007 at 07:55 PM .: link :.
Sunday, September 30, 2007
Kaedrin's own monkey research squad strikes again, with a pseudo-horror/Halloween theme. Enjoy:
Posted by Mark on September 30, 2007 at 10:15 PM .: link :.
Monday, September 24, 2007
We Could Be Heroes
Just for one day though. Apologies for the missing entry yesterday and the lame entry today. Time is still tight, so I'll just throw out a link to 5 Questions Season Two of Heroes Had Better F#@king Answer.
Unlike a certain show about people stranded on a mysterious island that we won't name, by the end of its first season NBC's hit series Heroes had managed to neatly wrap up the vast majority of its plot threads and running storylines. The cheerleader was saved; the sword was retrieved; and the exploding man was stopped. We didn't watch the finale of the mystery island show that we're not naming, but we wouldn't be surprised if Locke was left speechless by the sight of Patrick Duffy in the shower. Had it all been a dream?Some questions I have: Will they finally just get rid of Ali Larter's dumbass subplot? Which lame, cliched plot element will they get me to fall for anyway?
Update: The answer to my second question: Amnesia.
Posted by Mark on September 24, 2007 at 11:43 PM .: link :.
Sunday, September 16, 2007
Fantasy Football, 2007
As I mentioned earlier in the week, my schedule is pretty tight so my time for writing (and just about everything else) has been drastically reduced. So I'm just going to introduce my 2007 fantasy football team, the Star Wars Kids. I know most of my readers aren't big sports fans, but I can probably dash this off in a half hour, which I actually have enough time for. So I did very well last year, but my team peaked early and lost in the first round of the playoffs.
I was a little worried about this year. First, I had almost no time to prepare for the draft, which isn't usually a good sign. Second, the team I drafted seemed to be relying on a lot of "comeback" seasons (players who had a bad season or two due to injury or due to their team's performance, but who could make a comeback this year). Third, I ended up with a lackluster defense and my bench is a little weak. This is due to my position in the draft. I was last but the draft is a snake, so I had the 12th and 13th pick, but then had to wait for another 2 rounds for my next pick (36 overall). This position has its advantages, but it also meant that when a run on Defense/Special Teams happened, I ended up with scraps. Fourth, as an Eagles fan, I was frustrated by the fact that I ended up with Terrell Owens. He's a great performer, but on a personal level, I hate him. And he plays for the mortal enemy of the Eagles. I also have the Cowboys defense & special teams. Put simply, when the Eagles play the Cowboys, I'm going to be pretty conflicted.
Anyway, after one and a half weeks, it seems that the team I drafted is doing quite well for itself. Many of my gambles are paying off, and I may have underestimated some of my "sure things." So here's my team:
Update: Greg's draft didn't go as well as mine, but I think he'll make do.
Posted by Mark on September 16, 2007 at 07:43 PM .: link :.
Sunday, June 10, 2007
A few weeks ago, I wrote about how context matters when consuming art. As sometimes happens when writing an entry, that one got away from me and I never got around to the point I originally started with (that entry was originally entitled "Referential" but I changed it when I realized that I wasn't going to write anything about references), which was how much of our entertainment these days references its predecessors. This takes many forms, some overt (homages, parody), some a little more subtle.
I originally started thinking about this while watching an episode of Family Guy. The show is infamous for its random cutaway gags - little vignettes that have no connection to the story, but which often make some obscure reference to pop culture. For some reason, I started thinking about what it would be like to watch an episode of Family Guy with someone from, let's say, the 17th century. Let's further speculate that this person isn't a blithering idiot, but perhaps a member of the Royal Society or something (i.e. a bright fellow).
This would naturally be something of a challenge. There are some technical explanations that would be necessary. For example, we'd have to explain electricity, cable networks, signal processing and how the television works (which at least involves discussions on light and color). The concept of an animated show, at least, would probably be easy to explain (but it would involve a discussion of how the human eye works, to a degree).
There's more to it, of course, but moving past all that, once we start watching the show, we're going to have to explain why we're laughing at pretty much all of the jokes. Again, most of the jokes are simply references and parodies of other pieces of pop culture. Watching an episode of Family Guy with Isaac Newton (to pick a prominent Royal Society member) would necessitate a pause just about every minute to explain what each reference was from and why Family Guy's take on it made me laugh. Then there's the fact that Family Guy rarely has any sort of redeemable lesson and often deliberately skews towards actively encouraging evil (something along the lines of "I think the important thing to remember is that it's ok to lie, so long as you don't get caught." I don't think that exact line is in an episode, but it could be.) This works fine for us, as we're so steeped in popular culture that we get the fact that Family Guy is just lampooning the notion that we could learn important life lessons via a half-hour sitcom. But I'm sure Isaac Newton would be appalled.
For some reason, I find this fascinating, and try to imagine how I would explain various jokes. For instance, the episode I was watching featured a joke concerning "cool side of the pillow." They cut to a scene in bed where Peter flips over the pillow and sees Billy Dee Williams' face, which proceeds to give a speech about how cool this side of the pillow is, ending with "Works every time." This joke alone would require a whole digression into Star Wars and how most of the stars of that series struggled to overcome their typecasting and couldn't find a lot of good work, so people like Billy Dee Williams ended up doing commercials for a malt liquor named Colt 45, which had these really cheesy commercials where Billy Dee talked like that. And so on. It could probably take an hour before my guest would even come close to understanding the context of the joke (I'm not even touching the tip of the iceberg with this post).
And the irony of this whole thing is that jokes that are explained simply aren't funny. To be honest, I'm not even sure why I find these simple gags funny (that, of course, is the joy of humor - you don't usually have to understand it or think about it, you just laugh). Seriously, why is it funny when Family Guy blatantly references some classic movie or show? Again, I'm not sure, but that sort of humor has been steadily growing over the past 30 years or so.
Not all comedies are that blatant about their referential humor though (indeed, Family Guy itself doesn't solely rely upon such references). A recent example of a good referential film is Shaun of the Dead, which somehow manages to be both a parody and an example of a good zombie movie. It pays homage to all the classic zombie films and it also makes fun of other genres (notably the romantic comedy), but in doing so, the filmmakers have also made a good zombie movie in itself. The filmmakers have recently released a new film called Hot Fuzz, which attempts the same trick for action movies and buddy comedies. It is, perhaps, not as successful as Shaun, but the sheer number of references in the film is astounding. There are the obvious and explicit ones like Point Break and Bad Boys II, but there are also tons of subtle homages that I'd wager most people wouldn't get. For instance, when Simon Pegg yells in the movie, he's doing a pitch perfect impersonation of Arnold Schwarzenegger in Predator. And when he chases after a criminal, he imitates the way Robert Patrick's T-1000 runs from Terminator 2.
References don't need to be part of a comedy either (though comedies seem to make the easiest examples). Hop on IMDB and go to just about any recent movie, and click on the "Movie Connections" link in the left navigation. For instance, did you know that the aforementioned T2 references The Wizard of Oz and The Killing, amongst dozens of other references? Most of the time, these references are really difficult to pick out, especially when you're viewing a foreign film or show that's pulling from a different cultural background. References don't have to be story or character based - they can be the way a scene is composed or the way the lighting is set (i.e. the Venetian blinds in Noir films).
Now, this doesn't just apply to art either. A lot of common knowledge in today's world is referential. Most formal writing includes references and bibliographies, for instance, and a non-fiction book will often assume basic familiarity with a subject. When I was in school, I was always annoyed at the amount of rote memorization they made us do. Why memorize it if I could just look it up? Shouldn't you be focusing on my critical thinking skills instead of making me memorize arbitrary lists of facts? Sometimes this complaining was probably warranted, but most of it wasn't. So much of what we do in today's world requires a well-rounded familiarity with a large number of subjects (including history, science, culture, amongst many other things). There simply isn't any substitute for actual knowledge. Though it was a pain at the time, I'm glad emphasis was put on memorization during my education. A while back, David Foster noted that schools are actually moving away from this, and makes several important distinctions. He takes an example of a song:
Jakob Dylan has a song that includes lines on exactly this theme. As Foster notes, this doesn't mean that "thinking skills" are unimportant, just that knowledge is important too. You need to have a quality data set in order to use those "thinking skills" effectively.
Human beings tend to leverage knowledge to create new knowledge. This has a lot of implications, one of which is intellectual property law. Giving limited copyright to intellectual property is important, because the data in that property eventually becomes available for all to build upon. It's ironic that educators are considering less of a focus on memorization, as this requirement of referential knowledge has been increasing for some time. Students need a base of knowledge to both understand and compose new works. References help you avoid reinventing the wheel every time you need to create something, which leads to my next point.
I think part of the reason references are becoming more and more common these days is that it makes entertainment a little less passive. Watching TV or a movie is, of course, a passive activity, but if you make lots of references and homages, the viewer is required to think through those references. If the viewer has the appropriate knowledge, such a TV show or movie becomes a little more cognitively engaging. It makes you think, it calls to mind previous work, and it forces you to contextualize what you're watching based on what you know about other works. References are part of the complexity of modern television and film, and Steven Johnson spends a significant amount of time talking about this subject in his book Everything Bad is Good for You (from page 85 of my edition):
Nearly every extended sequence in Seinfeld or The Simpsons, however, will contain a joke that makes sense only if the viewer fills in the proper supplementary information -- information that is deliberately withheld from the viewer. If you haven't seen the "Mulva" episode, or if the name "Art Vandelay" means nothing to you, then the subsequent references -- many of them arriving years after their original appearance -- will pass on by unappreciated.

I know some people who hate Family Guy and Seinfeld, but I realized a while ago that they don't hate those shows because of the contents of the shows or because they were offended (though some people certainly are), but rather because they simply don't get the references. They didn't grow up watching TV in the 80s and 90s, so many of the references are simply lost on them. Family Guy would be particularly vexing if you didn't have the pop culture knowledge of the writers of that show. These reference-heavy shows are also a lot easier to watch and rewatch, over and over again. Why? Because each episode is not self-contained, you often find yourself noticing something new every time you watch. This also sometimes works in reverse. I remember the first time I saw Bill Shatner's campy rendition of Rocket Man, I suddenly understood a bit on Family Guy that I had thought was just random (but was really a reference).
Again, I seem to be focusing on comedy, but it's not necessarily limited to that genre. Eric S. Raymond has written a lot about how science fiction jargon has evolved into a sophisticated code that implicitly references various ideas, conventions and tropes of the genre:
In looking at an SF-jargon term like, say, "groundcar", or "warp drive" there is a spectrum of increasingly sophisticated possible decodings. The most naive is to see a meaningless, uninterpretable wordlike noise and stop there.

While comedy makes for convenient examples, I think this better illustrates the cognitive demands of referential art. References require you to be grounded in various subjects, and they'll often require you to think through the implications of those subjects in a new context. References allow writers to pack incredible amounts of information into even the smallest space. This, of course, requires the consumer to decode that information (using available knowledge and critical thinking skills), making the experience less passive and more engaging. The use of references will continue to flourish and accelerate in both art and scholarship, and new forms will emerge. One could even argue that aggregation in various weblogs is simply an exercise in referential work. Just look at this post, in which I reference several books and movies, in many cases assuming familiarity. Indeed, the whole structure of the internet is based on the concept of links -- essentially a way to reference other documents. Perhaps this is part of the cause of the rising complexity and information density of modern entertainment. We can cope with it now, because we have such systems to help us out.
Posted by Mark on June 10, 2007 at 03:08 PM
Wednesday, May 09, 2007
Last week, I hastily threw together a post on Coke, including some thoughts on Coke vs. Pepsi, the advertising of both brands, and Passover Coke. I've run across several people commenting on my post or similar issues over the past week.
Awesome. Ok, I cheated a little. I already had the normal size bottles on the left, but still, that's an impressive array of beer. Looks like I've got some work to do!
Wednesday, May 02, 2007
Link Dump: Coca-Cola Edition
I love Coca-Cola. I hate Pepsi. I probably wouldn't feel like that if it weren't for my parents. My brother prefers Pepsi. For reasons beyond my understanding, my parents nurtured this conflict. This is strange, since they generally just bought what was on sale (and we were growing up during the whole cola wars episode, so there were lots of sales). This manifested in various ways throughout the years, but the end result is that our preferences polarized. When I go to a restaurant and ask for a "Coke" and they ask if Pepsi is ok, I generally change my order to something else (root beer, water, etc...) Now, I'm not rude or even very confrontational about it, but this guy sure is:
"I'd like a Coca-Cola, please," I told the waiter.Now, I've seen people say "No, Pepsi is not ok," but asking for the waitress to run down to the 7-11 is pure, diabolical genius. Still, most of us Coke fiends aren't rude about our preferences. Take John Scalzi, who wrote a great Essay on Coca-Cola a while ago, and delved into the advertising of Coke and Pepsi:
I think there really is something to how Coke positions itself. One hates to admit that one is influenced by corporate branding -- it means that those damned advertisers actually managed to do their job -- but what can you say. It works. Since Coke is the market leader, it doesn't spend any time as far as I can see banging on Pepsi or other brands; its ads stick to their knitting, which is making sure that people feel that Coke is part of everyday life -- and at some point during your day, you're probably going to have a Coke. It's inevitable. And hey -- that's okay. That's as it should be, in fact. I don't know that I would call Coke's ads soft sells (after all, they brand the product literally up the wazoo), but I don't find the advertising utterly annoying.

And it goes on for a bit too. Great article.
This year, I learned about the existence of Passover Coke. The current Coke formula uses corn syrup as a sweetener because it's cheaper than pure cane sugar, but since it's not kosher to eat corn during Passover, Coke makes some special batches of cola using pure cane sugar. It's only available in limited quantities for a few weeks a year (you can tell because it's got a yellow cap and Hebrew writing on it). I didn't get a chance to do a taste test this year, but Widge did, and he says that people prefer Passover Coke to regular Coke. This, of course, leads him to make the obvious suggestion:
Look. I know it's easier to work with and cheaper and all that good stuff. But let's face it: consumers are trying to get away from the high fructose stuff. I don't pretend to even understand all the health controversy that's going on, I tried to read up on the Wikipedia article before writing this and it mentioned "plasma triacylglycerol" and my eyes sort of glazed over (mmmm, glaze). It sounds like something the crew of Star Trek Voyager would seek out while being chased by cauliflower-headed aliens. But forget all that: it just freaking tastes better. That's all I care about, because if I was really concerned about my health, why would I be drinking Coke?

I'd buy it. Good stuff.
Wednesday, April 18, 2007
Link Dump: Awesome Pictures Edition
Yes, time is still short these days, so just a few links featuring lots and lots of pictures:
Wednesday, March 14, 2007
As I waded through dozens of recommendations for Anime series (thanks again to everyone who contributed), I began to wonder about a few things. Anime seems to be a pretty vast subject and while I had touched the tip of the iceberg in the past, I really didn't have a good feel for what was available. So I asked for recommendations, and now I'm on my way. But it's not like I just realized that I wanted to watch more Anime. I've wanted to do that for a little while, but I've only recently acted on it. What took so long? Why is it so hard to get started?
This isn't something that's limited to deciding what to watch either. I find that just getting started is often the most difficult part of a task (or, at least, the part I seem to get stuck on the most). Sometimes it's difficult to deal with the novelty of a thing, other times a project seems completely overwhelming. But after I've begun, things don't seem so novel or overwhelming anymore. I occasionally find myself hesitant to start a new book or load up a new video game, but once I do, things flow pretty easily (unless the book or game is a really bad one). I have a bunch of ideas for blog posts that I never get around to attacking, but usually once I start writing, ideas flow much more readily. At work, I'll sometimes find myself struggling to get started on a task, but once I get past that initial push, I'm fine. Sure, there are excuses for all of these (interruptions, email, and meetings, for instance), but while they are sometimes true obstacles, they often strike me as rationalizations. Just getting started is the problem, but once I get into the flow, it's easy to keep going.
Joel Spolsky wrote an excellent essay on the subject called Fire and Motion:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.

It's an excellent point, and there does seem to be some sort of mental inertia at work here. But why? Why is it so difficult to get started?
When I think about this, I realize that this is a relatively new phenomenon for me. I don't remember having this sort of difficulty ten years ago. What's different? Well, I'm ten years older. The conventional wisdom is that it becomes more difficult to learn new things (i.e. to start something new) as you get older. There is some supporting evidence having to do with how the human brain becomes less malleable with time, but I'm not sure that paints the full picture. I think a big part of the problem is that as I got older, my standards rose.
Let me back up for a moment. A few years ago, a friend attempted to teach me how to drive a stick. I'd driven an automatic transmission my whole life up until that point, so the process of learning a manual transmission proved to be a challenging one. The actual mechanics of it are pretty straightforward and easily internalized. Sitting down and actually doing it, though, was another story. Intellectually, I knew what was going on, but it can be a little difficult to overcome muscle memory. I had a lot of trouble at first (and since I haven't driven a stick since then, I'd probably still have a lot of trouble today) and got extremely frustrated. My friend (who had gone through the same thing herself) laughed at this, making my lack of success even more infuriating. Eventually she explained to me that it wasn't that I was doing a bad job. It was that I was so used to being able to pick up something new and run with it, that when I had to do something extra challenging that took a little longer to pick up, I became frustrated. In short, I had higher standards for myself than I should have.
I think, perhaps, that's why it's difficult to start something new. It's not that learning has become harder, it's that I've become less tolerant of failure. My standards are higher, and that will sometimes make it hard to start something. This post, for example, has been brewing in my head for a while, but I had trouble getting started. This happens all the time, and I've actually got a bunch of ideas for posts stashed away somewhere. I've even written about this before, though only in a tangential way:
This weblog has come a long way over the three and a half years since I started it, and at this point, it barely resembles what it used to be. I started out somewhat slowly, just to get an understanding of what this blogging thing was and how to work it (remember, this was almost four years ago and blogs weren't nearly as common as they are now), but I eventually worked up into posting about once a day, on average. At that time, a post consisted mainly of a link and maybe a summary or some short commentary. Then a funny thing happened, I noticed that my blog was identical to any number of other blogs, and thus wasn't very compelling. So I got serious about it, and started really seeking out new and unusual things. I tried to shift focus away from the beaten path and started to make more substantial contributions. I think I did well at this, but it couldn't really last. It was difficult to find the offbeat stuff, even as I pored over massive quantities of blogs, articles and other information (which caused problems of its own). I slowed down, eventually falling into an extremely irregular posting schedule on the order of once a month, which I have since attempted to correct, with, I hope, some success. I recently noticed that I have been slumping somewhat, though I'm still technically keeping to my schedule.

Part of the reason I was slumping back then was that my standards were rising again. The problem is that I want what I write to turn out good, and my standards are high (relatively speaking - this is only a blog, after all). So when I sit down to write, I wonder if I'll actually be able to do the subject justice. At a certain point, though, you just have to pull the trigger and get started. The rest comes naturally. Is this post better than I had imagined? Probably not, but then, if I waited until it was perfect, I'd never post anything (and plus, that sorta defeats the purpose of blogging).
One of the things I've noticed since changing my schedule to post at least twice a week is that it forces me to lower my standards a bit, just so that I can get something out on time. Back when I started the one post a week schedule, I found that those posts were getting pretty long. I thought they were pretty good too, but as time went on, I wasn't able to keep up with my rising expectations. There's nothing inherently wrong with high expectations, but I've found it's good every now and again to adjust course. Even a well made clock drifts and must be calibrated from time to time, and so we must calibrate ourselves from time to time as well.
Update 3.15.07: It occurs to me that this post is overly-serious and may give you the wrong idea. In the comments, Pete notes that watching Anime is supposed to be fun. I agree wholeheartedly, and I didn't mean to imply differently. The same goes for blogging - I wrote a decent amount in this post about how blogging is difficult for me, but that's not really the right way to put it. I enjoy blogging too, that's why I do it. Sometimes I overthink things, and that's probably what I was doing in this post, but I think the main point holds. Learning can be impaired by high standards.
Wednesday, February 14, 2007
Intellectual Property, Copyright and DRM
Roy over at 79Soul has started a series of posts dealing with Intellectual Property. His first post sets the stage with an overview of the situation, and he begins to explore some of the issues, starting with the definition of theft. I'm going to cover some of the same ground in this post, and then some other things which I assume Roy will cover in his later posts.
I think most people have an intuitive understanding of what intellectual property is, but it might be useful to start with a brief definition. Perhaps a good place to start would be Article 1, Section 8 of the U.S. Constitution:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;

I started with this for a number of reasons. First, because I live in the U.S. and most of what follows deals with U.S. IP law. Second, because it's actually a somewhat controversial stance. The fact that IP is only secured for "limited times" is the key. In England, for example, an author does not merely hold a copyright on their work, they have a Moral Right.
The moral right of the author is considered to be -- according to the Berne convention -- an inalienable human right. This is the same serious meaning of "inalienable" the Declaration of Independence uses: not only can't these rights be forcibly stripped from you, you can't even give them away. You can't sell yourself into slavery; and neither can you (in Britain) give the right to be called the author of your writings to someone else.

The U.S. is different. It doesn't grant an inalienable moral right of ownership; instead, it allows copyright. In other words, in the U.S., such works are considered property (i.e. they can be sold, traded, bartered, or given away). This represents a fundamental distinction that needs to be made: some systems treat authorship as an inalienable personal right, while others grant more limited, transferable protections. When put that way, the U.S. system sounds pretty awful, except that it was designed for something different: our system was built to advance science and the "useful arts." The U.S. system still rewards creators, but only as a means to an end. Copyright is granted so that there is an incentive to create. However, such protections are only granted for "limited Times." This is because when a copyright is eternal, the system stagnates as protected parties stifle competition (this need not be malicious). Copyright is thus limited so that when a work is no longer protected, it becomes freely available for everyone to use and to build upon. This is known as the public domain.
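To make the "limited Times" idea concrete, here's a back-of-the-envelope sketch (my own illustration, not from the original post) of when a work enters the U.S. public domain, assuming the modern "life of the author plus 70 years" rule for works created after 1977. This is a simplification; actual terms vary by publication date, work-for-hire status, and renewals.

```python
def public_domain_year(author_death_year: int) -> int:
    """Year a post-1977 work enters the U.S. public domain.

    Assumes the life-plus-70 rule: protection runs through
    December 31 of the 70th year after the author's death,
    so the work becomes free to use on January 1 of the next year.
    """
    return author_death_year + 70 + 1

# An author who died in 1950: protected through the end of 2020,
# public domain starting in 2021.
print(public_domain_year(1950))  # -> 2021
```

The "+1" reflects that terms run through the end of the calendar year, which is why batches of works enter the public domain together each January 1.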
The end goal here is the advancement of society, and both protection and expiration are necessary parts of the mix. The balance between the two is important, and as Roy notes, one of the things that appears to have upset the balance is technology. This, of course, extends as far back as the printing press, records, cassettes, VHS, and other similar technologies, but more recently, a convergence between new compression techniques and increasing bandwidth of the internet created an issue. Most new recording technologies were greeted with concern, but physical limitations and costs generally put a cap on the amount of damage that could be done. With computers and large networks like the internet, such limitations became almost negligible. Digital copies of protected works became easy to copy and distribute on a very large scale.
The first major issue came up as a result of Napster, a peer-to-peer music sharing service that essentially promoted widespread copyright infringement. Lawsuits followed, and the original Napster service was shut down, only to be replaced by numerous decentralized peer-to-peer systems and darknets. This meant that no single entity could be sued for the copyright infringement that occurred on the network, but it resulted in a number of (probably ill-advised) lawsuits against regular folks (the anonymity of internet technology and the state of recordkeeping being what they are, this sometimes leads to hilarious cases, like when the RIAA sued a 79-year-old guy who doesn't even own a computer or know how to operate one).
Roy discusses the various arguments for or against this sort of file sharing, noting that the essential difference of opinion is the definition of the word "theft." For my part, I think it's pretty obvious that downloading something for free that you'd normally have to pay for is morally wrong. However, I can see some grey area. A few months ago, I pre-ordered Tool's most recent album, 10,000 Days, from Amazon. A friend who already had the album sent me a copy over the internet before I had actually received my copy of the CD. Does this count as theft? I would say no.
The concept of borrowing a book, CD, or DVD also seems pretty harmless to me, and I don't have a moral problem with borrowing an electronic copy, then deleting it afterwards (or purchasing it, if I liked it enough), though I can see how such a practice represents a bit of a slippery slope and wouldn't hold up in an honest debate (nor should it). It's too easy to abuse such an argument, or to apply it in retrospect. I suppose there are arguments to be made with respect to making distinctions between benefits and harms, but I generally find those arguments unpersuasive (though perhaps interesting to consider).
There are some other issues that need to be discussed as well. The concept of Fair Use allows limited use of copyrighted material without requiring permission from the rights holders. For example, including a screenshot of a film in a movie review. You're also allowed to parody copyrighted works, and in some instances make complete copies of a copyrighted work. There are rules pertaining to how much of the copyrighted work can be used and in what circumstances, but this is not the venue for such details. The point is that copyright is not absolute and consumers have rights as well.
Another topic that must be addressed is Digital Rights Management (DRM). This refers to a range of technologies used to combat digital copying of protected material. The goal of DRM is to use technology to automatically limit the abilities of a consumer who has purchased digital media. In some cases, this means that you won't be able to play an optical disc on a certain device, in others it means you can only use the media a certain number of times (among other restrictions).
To be blunt, DRM sucks. For the most part, it benefits no one. It's confusing, it basically amounts to treating legitimate customers like criminals while only barely (if that much) slowing down the piracy it purports to be thwarting, and it's led to numerous disasters and unintended consequences. Essential reading on this subject is this talk given to Microsoft by Cory Doctorow. It's a long but well-written and straightforward read that I can't summarize briefly (please read the whole thing). Some details of his argument may be debatable, but as a whole, I find it quite compelling. Put simply, DRM doesn't work and it's bad for artists, businesses, and society as a whole.
Now, the IP industries that are pushing DRM are not that stupid. They know DRM is a fundamentally absurd proposition: the whole point of selling IP media is so that people can consume it. You can't make a system that will prevent people from doing so, as the whole point of having the media in the first place is so that people can use it. The only way to perfectly secure a piece of digital media is to make it unusable (i.e. the only perfectly secure system is a perfectly useless one). That's why DRM systems are broken so quickly. It's not that the programmers are necessarily bad, it's that the entire concept is fundamentally flawed. Again, the IP industries know this, which is why they pushed the Digital Millennium Copyright Act (DMCA). As with most laws, the DMCA is a complex beast, but what it boils down to is that no one is allowed to circumvent measures taken to protect copyright. Thus, even though the copy protection on DVDs is obscenely easy to bypass, it is illegal to do so. In theory, this might be fine. In practice, this law has extended far beyond what I'd consider reasonable and has also been heavily abused. For instance, some software companies have attempted to use the DMCA to prevent security researchers from exposing bugs in their software. The law is sometimes used to silence critics by threatening them with a lawsuit, even though no copyright infringement was committed. The Chilling Effects project seems to be a good source for information regarding the DMCA and its various effects.
DRM combined with the DMCA can be stifling. A good example of how awful DRM is, and how the DMCA can affect the situation, is the Sony Rootkit Debacle. Boing Boing has a ridiculously comprehensive timeline of the entire fiasco. In short, Sony put DRM on certain CDs. The general idea was to prevent people from putting the CDs in their computer and ripping them to MP3s. To accomplish this, Sony surreptitiously installed software on customers' computers (without their knowledge). A security researcher happened to notice this, and in researching the matter found that the Sony DRM had installed a rootkit that made the computer vulnerable to various attacks. Rootkits are black-hat cracker tools used to disguise the workings of their malicious software. Attempting to remove the rootkit broke the Windows installation. Sony reacted slowly and poorly, releasing a service pack that supposedly removed the rootkit, but which actually opened up new security vulnerabilities. And it didn't end there. Reading through the timeline is astounding (as a result, I tend to shy away from Sony these days). Though I don't believe he was called on it, the security researcher who discovered these vulnerabilities was technically breaking the law, because the rootkit was intended to protect copyright.
A few months ago, my Windows computer died and I decided to give linux a try. I wanted to see if I could get linux to do everything I needed it to do. As it turns out, I could, but not legally. Watching DVDs on linux is technically illegal, because I'm circumventing the copy protection on DVDs. Similar issues exist for other media formats. The details are complex, but in the end, it turns out that I'm not legally able to watch my legitimately purchased DVDs on my computer (I have since purchased a new computer that has an approved player installed). Similarly, if I were to purchase a song from the iTunes Music Store, it comes in a DRMed format. If I want to use that format on a portable device (let's say my phone, which doesn't support Apple's DRM format), I'd have to convert it to a format that my portable device could understand, which would be illegal.
Which brings me to my next point, which is that DRM isn't really about protecting copyright. I've already established that it doesn't really accomplish that goal (and indeed, even works against many of the reasons copyright was put into place), so why is it still being pushed? One can only really speculate, but I'll bet that part of the issue has to do with IP owners wanting to "undercut fair use and then create new revenue streams where there were previously none." To continue an earlier example, if I buy a song from the iTunes music store and I want to put it on my non-Apple phone (not that I don't want one of those), the music industry would just love it if I were forced to buy the song again, in a format that is readable by my phone. Of course, that format would be incompatible with other devices, so I'd have to purchase the song again if I wanted to listen to it on those devices. When put in those terms, it's pretty easy to see why IP owners like DRM, and given the general person's reaction to such a scheme, it's also easy to see why IP owners are always careful to couch the debate in terms of piracy. This won't last forever, but it could be a bumpy ride.
Interestingly enough, distributors of digital media like Apple and Yahoo have recently come out against DRM. For the most part, these are just symbolic gestures. Cynics will look at Steve Jobs' Thoughts on Music and say that he's just passing the buck. He knows customers don't like or understand DRM, so he's just making a calculated PR move by blaming it on the music industry. Personally, I can see that, but I also think it's a very good thing. I find it encouraging that other distributors are following suit, and I also hope and believe this will lead to better things. Apple has proven that there is a large market for legally purchased music files on the internet, and other companies have even shown that selling DRM-free files yields higher sales. Indeed, the emusic service sells high quality, variable bit rate MP3 files without DRM, and it has established emusic as the #2 retailer of downloadable music behind the iTunes Music Store. Incidentally, this was not done for pure ideological reasons - it just made business sense. As yet, these pronouncements are only symbolic, but now that online media distributors have established themselves as legitimate businesses, they have ammunition with which to challenge the IP holders. This won't happen overnight, but I think the process has begun.
Last year, I purchased a computer game called Galactic Civilizations II (and posted about it several times). This game was notable to me (in addition to the fact that it's a great game) in that it was the only game I'd purchased in years that featured no CD copy protection (i.e. DRM). As a result, when I bought a new computer, I experienced none of the usual fumbling for 16 digit CD Keys that I normally experience when trying to reinstall a game. Brad Wardell, the owner of the company that made the game, explained his thoughts on copy protection on his blog a while back:
I don't want to make it out that I'm some sort of kumbaya guy. Piracy is a problem and it does cost sales. I just don't think it's as big of a problem as the game industry thinks it is. I also don't think inconveniencing customers is the solution.

For him, it's not that piracy isn't an issue, it's that it's not worth imposing draconian copy protection measures that infuriate customers. The game sold much better than expected. I doubt this was because they didn't use DRM, but I can guarantee one thing: People don't buy games because they want DRM. However, this shows that you don't need DRM to make a successful game.
The future isn't all bright, though. Peter Gutmann's excellent Cost Analysis of Windows Vista Content Protection provides a good example of how things could get considerably worse:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called "premium content", typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it's not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server).

This is infuriating. In case you can't tell, I've never liked DRM, but at least it could be avoided. I generally take articles like the one I'm referencing with a grain of salt, but if true, it means that the DRM in Vista is so oppressive that it will raise the price of hardware. And since Microsoft commands such a huge share of the market, hardware manufacturers have to comply, even though some people (Linux users, Mac users) don't need the draconian hardware requirements. This is absurd. Microsoft should have enough clout to stand up to the media giants; there's no reason the DRM in Vista has to be so invasive (or even exist at all). As Gutmann speculates in his cost analysis, some of the potential effects of this are particularly egregious, to the point where I can't see consumers standing for it.
My previous post dealt with Web 2.0, and I posted a YouTube video that summarized how changing technology is going to force us to rethink a few things: copyright, authorship, identity, ethics, aesthetics, rhetorics, governance, privacy, commerce, love, family, ourselves. All of this is true. Earlier, I wrote that the purpose of copyright was to benefit society, and that protection and expiration were both essential. The balance between protection and expiration has been upset by technology. We need to rethink that balance. Indeed, many people smarter than I already have. The internet is replete with examples of people who have profited by giving things away for free. Creative Commons allows you to share your content so that others can reuse and remix it, but I don't think it has been adopted to the extent that it should be.
To some people, reusing or remixing music, for example, is not a good thing. This is certainly worthy of a debate, and it is a discussion that needs to happen. Personally, I don't mind it. For an example of why, watch this video detailing the history of the Amen Break. There are amazing things that can happen as a result of sharing, reusing and remixing, and that's only a single example. The current copyright environment seems to stifle such creativity, not least because copyright lasts so long (currently the life of the author plus 70 years). In a world where technology has enabled an entire generation to accelerate the creation and consumption of media, it seems foolish to lock up so much material for what could easily be over a century. Despite all that I've written, I have to admit that I don't have a definitive answer. I'm sure I can come up with something that would work for me, but this is larger than me. We all need to rethink this, and many other things. Maybe that Web 2.0 thing can help.
Update: This post has mutated into a monster. Not only is it extremely long, but I reference several other long, detailed documents and even somewhere around 20-25 minutes of video. It's a large subject, and I'm certainly no expert. Also, I generally like to take a little more time when posting something this large, but I figured getting a draft out there would be better than nothing. Updates may be made...
Update 2.15.07: Made some minor copy edits, and added a link to an Ars Technica article that I forgot to add yesterday.
Posted by Mark on February 14, 2007 at 11:44 PM .: link :.
Wednesday, January 31, 2007
Samoas versus Caramel deLites
My favorite Girl Scout cookies are unquestionably the Samoas (Thin Mints and Tagalongs are also quite good, but nothing compares to the mighty Samoa). Several years ago, I went to purchase a box and was surprised to learn that they changed the name to Caramel deLites. And they seemed to taste different too! It didn't take long to notice that Samoas were still being sold, and as it turns out, there are two commercial bakeries licensed to make Girl Scout cookies. Little Brownie Bakers uses the strange names we're nonetheless familiar with: Samoas, Tagalongs, Do-si-dos, Trefoils, etc... ABC Bakers' names are much more prosaic and descriptive: Caramel deLites, Peanut Butter Patties, Peanut Butter Sandwiches, Shortbread, etc...
Generally, both bakeries are pretty good, but the question is, what are the differences and which are better? Let's take a look at Samoas versus Caramel deLites.
The Caramel deLites are on the left, and the Samoas are on the right. As you can see, the Caramel deLites have a somewhat lighter color to them, and that's partially because they use milk chocolate as opposed to dark chocolate. Wikipedia says they don't have as much caramel as Samoas, but I'm not sure about that. Personally, I think they're chewier than Samoas, and if I had to choose, I'd choose Samoas. But maybe I'm just weird. I asked around, and there didn't seem to be a consensus. Some people loved one variety, others loved the other, most were indifferent.
So I did a test. I put one box of each on my desk, removed any identification, and put a note up that asked people to try one of each and vote for their preferred cookie. This was a single blind test, and the cookies were labeled only A and B. Ok, so it was hardly a stringent methodology and a lot of people knew which were which just by looking at them, but in the end, it appears that Samoas have a slight edge. A sample size of 8 people is statistically significant enough for me, and it came out 5-3 in favor of Samoas. So there, Samoas are empirically better than Caramel deLites. It's scientific!
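As an aside, it's easy to check just how (in)significant a 5-3 split actually is. Here's a quick sketch of an exact two-sided binomial test, assuming a fair-coin null hypothesis (i.e. no real preference between the cookies):

```python
from math import comb

def two_sided_binomial_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes at least as unlikely as the observed count k."""
    pk = comb(n, k) * p**k * (1 - p)**(n - k)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if comb(n, i) * p**i * (1 - p)**(n - i) <= pk + 1e-12)

# 5 votes to 3 out of 8 tasters
print(round(two_sided_binomial_p(5, 8), 3))  # → 0.727
```

A p-value of about 0.73 is nowhere near the conventional 0.05 threshold, so "statistically significant enough for me" is doing a lot of work here. You'd need a much bigger cookie budget to settle this properly.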
A couple of us also compared the Thin Mints (which are the only ones I know of that have the same name no matter what baker), but results were mixed. The cookies are clearly different, and the ABC Bakers (the ones with the prosaic names) Thin Mint actually seems more minty, but they're both pretty good. No stats for this one, but anecdotal evidence suggests that people like the ABC Bakers version better. So there you go. They're both good.
Incidentally, if you can get your hands on Edy's® Girl Scouts® Samoas® Cookie Ice Cream, I highly recommend stocking up. It's available slightly longer than the cookies are, but it'll be gone by March, and it's quite possible the greatest ice cream ever created.
Wednesday, January 03, 2007
Japanese Cootie Shots
One of the things that interests me about foreign films is the way various aspects of culture become lost in the translation to English. In some cases, this is due to the literal translation of dialogue, but in others it's due to a physical mannerism or custom that simply can't be translated. In a post about Lain's Bear Pajamas in the Anime series Serial Experiments Lain, I mention an example of such a gesture that appears in Miyazaki's Spirited Away. Of course, I got the details of the gesture completely wrong in that post, but the general concept is similar. Since Spirited Away is the next film in the Animation Marathon, I got the DVD and took some screenshots. The main character, a little girl named Chihiro, steps on a little black slug and the boiler room man, Kamaji, says that this is gross and will bring bad luck. So she turns around and puts her thumbs and forefingers together while he pushes his hand through (click the images for a larger version).
Now this is obviously some sort of gesture meant to counteract bad luck, but it's a little strange. The dialogue in the scene helps, though the subtitles and the dubbing differ considerably (as I have been noticing lately). The subtitled version goes like this:
KAMAJI: Gross, gross, Sen! Totally gross!

Quite sparse, though the meaning is relatively clear. The dubbed version expands on the concept a little more:
KAMAJI: You killed it! Those things are bad luck. Hurry, before it rubs off on you! Put your thumbs and forefingers together.

I noticed this gesture the first time I saw the movie, because I thought it was strange and figured there had to be a little more to it than what was being translated. On the DVD there is a little featurette called The Art of 'Spirited Away', and in one of the sections, the translators mention that they were baffled by the gesture and weren't sure how to translate it. After researching the issue, they concluded that it's essentially the Japanese equivalent of a cootie shot. Of course, this makes a lot of sense, and it's totally something a kid would do in response to stepping on something gross (this film, like many of Miyazaki's other films, seems to nail a lot of the details of what it's like to be a kid). It also illustrates that the boiler room man isn't quite as gruff as he appears, and that he even has a bit of a soft spot for children. Interestingly enough, this gesture is repeated by a little mouse (I think it's a mouse) and the soot balls that work in the boiler room, though I don't remember that (I'll try to grab screenshots when I rewatch the whole film).
Again, Spirited Away is the next film in the Animation Marathon, and it's probably the best of the bunch as well. Expect a full review soon, though I'm not sure how detailed it will be. Filmspotting (the podcast that's actually running the marathon) is on a bit of a break from the marathon, as they're doing their obligatory 2006 wrap up shows and best of the year lists.
Sunday, December 24, 2006
In the future, pine trees will be extinct, and then what will we do for Christmas trees? We'll use a cactus. I present you with this year's Traditional Kaedrin Christmas Cactus:
The picture didn't turn out as well as last year (it keeps coming out fuzzy for some reason, perhaps because of all the extra lights or because of the lighting - hey look, a handy guide for taking pictures of Christmas lights), but it'll do well enough.
Moving on, a few other Christmas links for your enjoyment:
Tuesday, December 19, 2006
It was only a fantasy...
I've never been much of a sports fan, but in recent years I have become a fantasy sports fan. The funny thing about fantasy sports is that it totally distorts the importance of events in games. Take, for instance, last week's Monday Night Football game. We were nearing playoff time in fantasy football. My roommate and I were dominating the league, and had clinched playoff spots. There was one other team with a winning record who had also clinched. And there were 2 teams in contention for the final playoff spot. It's a head-to-head league, and I was playing one of the 2 teams. Due to some bad performances by key members of my team (*cough, cough, Tom Brady, cough*), I was down by 5 points by the end of the Sunday games. He had no players remaining, but I had 1 person playing in the Monday night football game. There's just one problem: he's a kicker - not a position known for high scoring. A kicker gets 1 fantasy point for every extra point he kicks, and field goals are worth 3-6 points (depending on the distance of the kick). So basically, what you had last week was 4 or 5 people throughout the northeast intensely following and rooting for (or against)... a kicker.
Me: They're in field goal range! Call in Wilkins!

As luck would have it, I lost. However, I was still in the playoffs and I ended up playing the same person I would have played anyway. Alas, it appears that my team peaked early. After going 12-1 during the first 13 weeks of play, I've gone 0-2 in the past two weeks. I lost in the first round of the playoffs. There may still be some hope of finishing third, but I must concede that my season didn't end the way I planned. The main culprit here was injuries, as my top Wide Receiver and another solid Running Back both went down in recent weeks, thus weakening my team considerably. Nevertheless, I bear my team no ill will, and so I'll let the Badgers take a bow:
Sunday, December 17, 2006
Just Do It
In Paul Graham's essay Made in USA, he writes about America's tendencies towards design.
Americans are good at some things and bad at others. We're good at making movies and software, and bad at making cars and cities. And I think we may be good at what we're good at for the same reason we're bad at what we're bad at. We're impatient. In America, if you want to do something, you don't worry that it might come out badly, or upset delicate social balances, or that people might think you're getting above yourself. If you want to do something, as Nike says, just do it.

It's amazing how well the "Just Do It" marketing line fits America (the only other tagline that works as well is EA Sports' "If it's in the game, it's in the game"), and Graham is certainly right about how that affects programmers. I've noticed that there are really two different types of programmers: people who look stuff up, and people who just try it to see if it works. People ask me questions about HTML or CSS all the time. Sometimes I know the answer, sometimes I don't, but most of the time my response is "Have you tried it to see what happens?" HTML is pretty simple, and it's easy to test out various concepts. There's no reason not to, and trying it is also the best way to learn. I'm reminded of this design parable about a ceramics class:
The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot - albeit a perfect one - to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work - and learning from their mistakes - the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

There are several interesting things about this. First, as Graham notes in his essay, good craftsmanship means working fast and iterating your design. Second, failure isn't a bad thing in this story. In fact, failure is a necessary component of success. In such a scenario, people who work fast and iterate do much better than people who meticulously plan their designs. As Graham belabors in his essay, this works for some things, but not others.
Of course, not all American designs are bad, and Graham mentions the obvious exception:
Apple is an interesting counterexample to the general American trend. If you want to buy a nice CD player, you'll probably buy a Japanese one. But if you want to buy an MP3 player, you'll probably buy an iPod. What happened? Why doesn't Sony dominate MP3 players?

It's because Apple is obsessed with good design ("Or more precisely, their CEO is.") Interestingly, I think one of the reasons the iPod is so successful is that Apple understands the paradox of choice really well. The iPod isn't and has never really been the leader in terms of features or functionality. But it does what it does extremely well, and I think that's partly because the iPod is actually quite simple. If you loaded it up with all sorts of extra features, there's no way you'd be able to keep the simplicity of the interface, and that would make it harder to use, and much less attractive.
In the end, I don't know that I agree with everything in Graham's essay, but his stuff is always worth reading.
Sunday, October 22, 2006
The Paradox of Choice
At the UI11 Conference I attended last week, one of the keynote presentations was made by Barry Schwartz, author of The Paradox of Choice: Why More Is Less. Though he believes choice to be a good thing, his presentation focused more on the negative aspects of offering too many choices. He walks through a number of examples that illustrate the problems with our "official syllogism" which is:
So how do we react to all these choices? Luke Wroblewski provides an excellent summary, which I will partly steal (because, hey, he's stealing from Schwartz after all):
Another example is my old PC, which recently kicked the bucket. I actually assembled that PC from a bunch of parts, rather than going through a mainstream company like Dell, and the number of components available would probably make the Circuit City stereo example I gave earlier look tiny by comparison. Interestingly, this diversity of choices for PCs is often credited as part of the reason PCs overtook Macs:
Back in the early days of Macintoshes, Apple engineers would reportedly get into arguments with Steve Jobs about creating ports to allow people to add RAM to their Macs. The engineers thought it would be a good idea; Jobs said no, because he didn't want anyone opening up a Mac. He'd rather they just throw out their Mac when they needed new RAM, and buy a new one.

But as Schwartz would note, the number of choices involved in assembling your own computer can be stifling. This is why computer and software companies like Microsoft, Dell, and Apple (yes, even Apple) insist on mediating the user's experience with their hardware by limiting access (i.e. by limiting choice). This turns out to be not so bad, because the number of things to consider really is staggering. So why was I so happy with my computer? Because I really didn't make many of the decisions - I simply went over to Ars Technica's System Guide and used their recommendations. When it comes time to build my next computer, what do you think I'm going to do? Indeed, Ars is currently compiling recommendations for their October system guide, due out sometime this week. My new computer will most likely be based on their "Hot Rod" box. (Linux presents some interesting issues in this context as well, though I think I'll save that for another post.)
So what are the lessons here? One of the big ones is to separate the analysis from the choice by getting recommendations from someone else (see the Ars Technica example above). In the market for a digital camera? Call a friend (preferably one who is into photography) and ask them what to get. Another thing that strikes me is that just knowing about this can help you overcome it to a degree. Try to keep your expectations in check, and you might open up some room for pleasant surprises (doing this is surprisingly effective with movies). If possible, try using the product first (borrow a friend's, use a rental, etc...). Don't try to maximize the results so much; settle for things that are good enough (this is what Schwartz calls satisficing).
Without choices, life is miserable. When options are added, welfare is increased. Choice is a good thing. But too much choice causes the curve to level out and eventually start moving in the other direction. It becomes a matter of tradeoffs. Regular readers of this blog know what's coming: We don't so much solve problems as we trade one set of problems for another, in the hopes that the new set of problems is more favorable than the old. So where is the sweet spot? That's probably a topic for another post, but my initial thoughts are that it would depend heavily on what you're doing and the context in which you're doing it. Also, if you were to take a wider view of things, there's something to be said for maximizing options and then narrowing the field (a la the free market). Still, the concept of choice as a double-edged sword should not be all that surprising... after all, freedom isn't easy. Just ask Spider-Man.
Sunday, October 15, 2006
I've been quite busy lately so once again it's time to unleash the chain-smoking monkey research squad and share the results:
Posted by Mark on October 15, 2006 at 11:09 PM .: link :.
Thursday, September 21, 2006
Gather Intelligence to Be Effective in Interviews, Bounty Hunting
Through following a trail of links long enough that I don't remember where I started, I stumbled upon a post about interviewing. In itself, this is unremarkable. However, at the time, I happened to be watching an episode of Firefly (well, I had it on in the background). Because I am a nerd, I also had the commentary track on, and just as I read about the interviewing anecdote, Joss Whedon (writer/creator of Firefly) began relating something that eerily paralleled the interviewing "secret" in the post referenced above.
The "secret" is to know those who are interviewing you, and tailor your answers to match the type of response each person is looking for. The post's author tells the story of how he interviewed for a principalship at a school in his district - or rather, how a friend helped him prepare:
She drew a rectangle on a piece of paper. “This is the table,” she said. She began to draw small circles around the table — 10 of them. She named each circle. She identified them as the people who would be interviewing me. This was not secret information, this was the panel that every potential principal had to face. The SECRET came next. She pointed to the first circle, “This is John Williams (not his real name). John tends to ask many data related questions. He likes brevity. Keep your answers short to him. Make your point and be quiet.” She pointed to the next circle. “This is Mary Thomas, she’s very child-oriented. She’s very warm and friendly and loves to talk. Answer her questions and orient your answers to how children are affected. Talk a lot with her; elaborate all your points. She’s warm and fuzzy, so use many personal anecdotes.” She continued around the table and when finished, it was like I had the playbook of an opposing football team. I knew the type of questions they would ask. I learned the type of answer each interviewer liked to hear.

This is interesting and, naturally, the advice is not limited to interviewing. (Those who have not seen Firefly but want to might want to bug out here, as spoilers are ahead.) Take Jubal Early. He's a bounty hunter, and he's after one of the people on Serenity. To get to her, he has to make sure the rest of the crew does not get in his way. So before he starts, he listens in on some conversations on the ship, gathering intelligence. As Whedon notes in the commentary:
Early has a very specific way of dealing with every character on the ship. He has listened to their conversation, so he understands he knows enough about them. And he understands that when you're with Mal, you have to take him out instantly because Mal is a physical threat that is very real. And then, you know, he closes up Jayne and Zoe and all the threats ... Kaylee is someone he approaches a different way - through a very horrible form of sexual intimidation. ... Later on we'll see him dealing with Book. And we'll see him dealing with Simon. When he deals with Book, again this guy has to be taken out, which gives us a little insight into Book's character. ... And of course, he deals with Simon with logic, because he realizes that the best way to deal with Simon is to use logic because that's the kind of person he is.

For those who haven't seen the series, some of this might not make sense, but each approach does fit its target. Mal is the captain and he won't stand for an outsider's shenanigans, especially when that outsider threatens the crew. Jayne and Zoe are also physical threats. Kaylee is like a delightful pixie, which makes Early's approach particularly disturbing. Shepherd Book is a priest, though events like the one in this episode indicate that Book has a less than saintly past. Simon is a doctor, and he's very proper, so a logical approach fits him well.
Again, this advice isn't limited to interviewing and bounty hunting. Knowing who you're dealing with is important, and allows you to orient your responses to their expectations. A little while ago, I was promoted to a management position. One of the interesting changes for me is that I'm dealing with a much wider variety of people, and thus I have to modulate my message depending on who I'm talking to. Of course knowing this and doing this are two different things, and I'm certainly no expert when it comes to this stuff. It comes naturally to some people, but not especially to me.
Anyway, not something I expected to write, but the coincidence struck me...
Sunday, September 17, 2006
A few weeks ago, I wrote about magic and how subconscious problem solving can sometimes seem magical:
When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. ...

And indeed, Jason Kottke recently posted about how design works, referencing a couple of other designers, including Michael Bierut of Design Observer, who describes his process like this:
When I do a design project, I begin by listening carefully to you as you talk about your problem and read whatever background material I can find that relates to the issues you face. If you’re lucky, I have also accidentally acquired some firsthand experience with your situation. Somewhere along the way an idea for the design pops into my head from out of the blue. I can’t really explain that part; it’s like magic. Sometimes it even happens before you have a chance to tell me that much about your problem!

[emphasis mine] It is like magic, but as Bierut notes, this sort of thing is becoming more important as we move from an industrial economy to an information economy. He references a book about managing artists:
At the outset, the writers acknowledge that the nature of work is changing in the 21st century, characterizing it as "a shift from an industrial economy to an information economy, from physical work to knowledge work." In trying to understand how this new kind of work can be managed, they propose a model based not on industrial production, but on the collaborative arts, specifically theater.

This is very interesting and dovetails nicely with several topics covered on this blog. Harnessing self-organizing forces to produce emergent results seems to be rising in importance significantly as we proceed towards an information based economy. As noted, collaboration is key. Older business models seem to focus on a more brute force way of solving problems, but as we proceed we need to find better and faster ways to collaborate. The internet, with its hyperlinked structure and massive data stores, has been struggling with a data analysis problem since its inception. Only recently have we really begun to figure out ways to harness the collective intelligence of the internet and its users, and even now, we're only scratching the surface. Collaborative projects like Wikipedia or wisdom-of-crowds aggregators like Digg or Reddit represent an interesting step in the right direction. The challenge here is that we're not facing the problems directly anymore. If you want to create a comprehensive encyclopedia, you can hire a bunch of people to research, write, and edit entries. Wikipedia tried something different. They didn't explicitly create an encyclopedia, they created (or, at least, they deployed) a system that made it easy for a large number of people to collaborate on a large number of topics. The encyclopedia is an emergent result of that collaboration. They sidestepped the problem, and as a result, they have a much larger and more dynamic information resource.
None of those examples are perfect, of course, but the more I think about it, the more I think that their imperfection is what makes them work. As noted above, you're probably much better off releasing a site that is imperfect and iterating, making changes and learning from your mistakes as you go. When dealing with these complex problems, you're not going to design the perfect system all at once. I realize that I keep saying we need better information aggregation and analysis tools, and that we have these tools, but they leave something to be desired. The point of these systems, though, is that they get better with time. Many older information analysis systems break when you increase the workload quickly. They don't scale well. These newer systems only really work well once they have high participation rates and large amounts of data.
It remains to be seen whether or not these systems can actually handle that much data (and participation), but like I said, they're a good start and they're getting better with time.
Sunday, September 03, 2006
Does Magic Exist?
I'm back from my trip and it appears that the guest posting has fallen through. So a quick discussion on magic, which was brought up by a friend on a discussion board I frequent. The question: Does magic exist?
I suppose this depends on how you define magic. Arthur C. Clarke once famously said that "Any sufficiently advanced technology is indistinguishable from magic." And that's probably true, right? If some guy can bend spoons with his thoughts, there's probably a rational explanation for it... we just haven't figured it out yet. Does it count as magic if we don't know how he's doing it? What about when we do figure out how he's doing it? What if it really was some sort of empirically observable telekinesis?
After all, magicians have been performing for hundreds of years, relying on sleight of hand and misdirection1 (amongst other tricks of the trade). However, I suspect that's not the type of answer that's being sought.
One thing I think is interesting is the power of thought and how many religious and "magical" traditions were really just ways to harness thought in a productive fashion. For example, crystal balls are often considered to be a magical way to see the future. While not strictly true, it was found that those who look into crystal balls for a long period of time end up entering a sort of trance, similar to hypnosis, and the human mind is able to make certain connections it would not normally make2. Can such a person see the future? I doubt it, but I don't doubt that such people often experience a "revelation" of sorts, even if it is sometimes misguided.
However, you see something similar, though a lot more controlled and a lot less hokey, in a lot of religious traditions. For instance, take Christian Mass and prayer. Mass offers a number of repetitive aspects like singing combined with several chances for reflection and thought. I've always found that going to mass was very helpful in that it put things in a whole new perspective. Superficial things that worried me suddenly seemed less important and much more approachable. Repetitive rituals (like singing in Church) often bring back powerful feelings of the past, etc... further reinforcing the reflection from a different perspective.
Taking it completely out of the spiritual realm, I see very rational people doing the same thing all the time. They just aren't using the same vocabulary. When confronted with a particularly daunting problem, I'll work on it very intensely for a while. However, I find that it's best to stop after a bit and let the problem percolate in the back of my mind while I do completely unrelated things. Sometimes, the answer will just come to me, often at the strangest times. Occasionally, this entire process will happen without my intending it, but sometimes I'm deliberately trying to harness this subconscious problem solving ability. And I don't think I'm doing anything special here; I think everyone has these sort of Eureka! moments from time to time. Once you remove the theology from it, prayer is really a similar process.
Once I noticed this, I began seeing similar patterns throughout my life and even history. For example, Archimedes. He was tasked with determining whether a given substance was gold or not (at the time, this was a true challenge). He toiled and slaved at the problem for weeks, pushing all other aspects of his life away. Finally, his wife, sick of her husband's dirty appearance and bad odor, made him take a bath. As he stepped into the tub, he noticed the water rising and had a revelation... this displacement could be used to accurately measure volume, which could then be used to determine density and ultimately whether or not a substance was gold. The moral of the story: Listen to your wife! [3]
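Incidentally, the physics behind Archimedes' insight is simple enough to sketch in a few lines of code. The numbers below are made up for illustration (the function names and tolerance are mine, not anything historical):

```python
# Archimedes' displacement test, sketched with hypothetical numbers.
# Density = mass / volume; volume is measured by how much water the
# object displaces when submerged.

GOLD_DENSITY = 19.3  # g/cm^3, accepted value for pure gold

def density(mass_g, displaced_water_cm3):
    """Density from weighed mass and displaced water volume."""
    return mass_g / displaced_water_cm3

def looks_like_gold(mass_g, displaced_water_cm3, tolerance=0.5):
    """True if the measured density is close to that of pure gold."""
    return abs(density(mass_g, displaced_water_cm3) - GOLD_DENSITY) < tolerance

# A 1000 g crown displacing 52 cm^3 of water (~19.2 g/cm^3): plausibly gold.
print(looks_like_gold(1000, 52))   # True
# The same mass displacing 95 cm^3 (~10.5 g/cm^3, close to silver): not gold.
print(looks_like_gold(1000, 95))   # False
```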
Have I actually answered the question? Well, I may have veered off track a bit, but I find the process of thinking to be interesting and quite mysterious. After all, whatever it is that's going on in our noggins isn't understood very well. It might just be indistinguishable from magic...
1 - Note to self: go see The Illusionist! Also, The Prestige looks darn good. Why does Hollywood always produce these things in pairs? At least it looks like there's good talent involved in each of these productions...
2 - Oddly enough, I discovered this nugget on another trip through the library stacks while I was supposed to be studying in college. Just thought I should call that out in light of recent posting...
3 - Yes, this is an anecdote from the movie Pi.
Sunday, June 25, 2006
Art for the computer age...
I was originally planning on doing a movie review while our gentle web-master is away, but a topic has come up too many times in the past few weeks for me not to write about it. First it came up in the tag map of Kaedrin, when I noticed that some people were writing pages just to create appealing tag-maps. Then it came up in Illinois and Louisiana. They've passed laws regulating the sale and distribution of "violent games" to minors. This, of course, has led to lawsuits and claims that the law violates free speech. After that, it was the guys at Penny Arcade. They posted links to We Feel Fine and Listening Post. Those projects search the internet for blogs (maybe this one?) and pull text from them about feelings, and present those feelings to an audience in different ways. Very interesting. Finally, it came up when I opened up the July issue of Game Informer, and read Hideo Kojima's quote:
I believe that games are not art, and will never be art. Let me explain - games will only match their era, meaning what the people of that age want reflects the outcome of the game at that time. So, if you bring a game from 20 years ago out today, no one will say "wow." There will be some essence where it's fun, but there won't be any wows or touching moments. Like a car, for example. If you bring a car from 20 years ago to the modern day, it will be appealing in a classic sense, but how much gasoline it uses, or the lack of air conditioning will simply not be appreciated in that era. So games will always be a kind of mass entertainment form rather than art. Of course, there will be artistic ways of representing games in that era, but it will still be entertainment. However, I believe that games can be a culture that represent their time. If it's a light era, or a dark era, I always try to implement that era in my works. In the end, when we look back on the projects, we can say "Oh, it was that era." So overall, when you look back, it becomes a culture.Every time I reread that quote, I cringe. Here's a man who is one of the most significant forces in video games today, the creator of Metal Gear, and he's saying "No, they're not art, and never will be." I find his distinction between mass entertainment and art troubling, and his comparison to a car flawed.
It's true that games will always be a reflection of their times - just like anything else is. The limitations of the time and the attitudes of the culture at the time are going to have an effect on everything coming out of that time. A car made in the 60s is going to show the style of the 60s, and is going to have the tech of the 60s. That makes sense. Of course, a painting made in the 1700s is going to show the limits and reflect the feelings of its time, too. The paints, brushes, and canvas used then aren't necessarily going to be the same as the ones used now, especially with the popular use of computers in painting. The fact that something is a reflection of the times isn't going to stop people from appreciating the artistic worth of that thing. The fact that the Egyptians hadn't mastered perspective doesn't stop anyone from wanting to see their statues.
What does that really tell us, though? Nothing. A car from the 80s may not be appreciated as much as a new model car as a means of transport, but Kojima seems to be completely forgetting that there are many cars that are appreciated as special. Nobody buys a 60s era muscle car because they think it's a good car for driving around in- they buy it because they think it's special, because some people view older cars as collectable. Some people do see them as more than a mere means of transportation. People are very much "wowed" by old cars. Is there any reason why this can't be true of games?
I am 8 Bit seems to suggest that there are people who are still wowed by those games. Kojima may be partially correct, though. Maybe most of those early games won't hold up in the long run. That shouldn't be a surprise. They're the first generation of games. The 8-Bit era was the beginning of the new wave of games, though. For the first time, creators could start to tell real stories, beyond simple high-score pursuit. Game makers were just getting their wings, and starting to see what games were really capable of. Maybe early games aren't art. Does that mean that games aren't art?
The problem mostly seems to be that we're asking the wrong questions. We shouldn't be asking "are video games art" any more than we'd ask "are movies art." It's a loaded question and you'll never come to any real answer, because the answer is going to depend completely on what movie you're looking at, and who you're asking. The same holds true with games. The question shouldn't be whether all games are art, but whether a particular game has some artistic merit. How we decide what counts as art is constantly up for debate, but there are games that raise such significant moral or philosophical questions, or have such an amazing sense of style, or tell such an amazing story, that it seems hard to argue that they have no artistic merit.
All of this really is leading somewhere. Computers have changed everything. I know that seems obvious, but I think it's taking some people- people like Kojima- a little longer to realize it. Computers have opened up a level of interactivity and access to information that we've never really had before. I can update Kaedrin from Michigan, and can send a message to a friend in Germany, all while buying videos from Japan and playing chess with a man in Alaska (not that I'm actually doing those things... but I could). These changes are going to be reflected in the art our culture produces. There's going to be backlash and criticism, and we're going to find that some people just don't "get it" or don't want to. We've gone through the same thing countless times before. Nobody thought movies would be seen as art when they came on the scene, and they were sure that the talkies wouldn't. When Andy Warhol came out, there were plenty of nay-sayers. Soup cans? As art? Computers have generally been accepted as a tool for making art, but I think we're still seeing the limits pushed. We've barely scratched the surface. The interaction between art, artist, and viewer is blurring, and I, for one, can't wait to see what happens.
Sunday, June 18, 2006
David Wong's article on the coming video game crash seems to have inspired Steven Den Beste, who agrees with Wong that there will be a gaming crash and also thinks that the same problems affect other forms of entertainment. The crux of the problem is novelty, and part of it appears to be evolutionary as well. As humans, we are conditioned for certain things, and it seems that two of our instincts are conflicting.
The first instinct is the human tendency to rely on induction. Correlation does not imply causation, but most of the time, we act like it does. We develop a complex set of heuristics and guidelines that we have extrapolated from past experiences. We do so because circumstances require us to make all sorts of decisions without possessing the knowledge or understanding necessary to provide a correct answer. Induction allows us to operate in situations which we do not understand. Psychologist B. F. Skinner famously explored and exploited this trait in his experiments. Den Beste notes this in his post:
What you do is to reward the animal (usually by giving it a small amount of food) for progressively behaving in ways which is closer to what you want. The reason Skinner studied it was because he (correctly) thought he was empirically studying the way that higher thought in animals worked. Basically, they're wired to believe that "correlation often implies causation". Which is true, by the way. So when an animal does something and gets a reward it likes (e.g. food) it will try it again, and maybe try it a little bit differently just to see if that might increase the chance or quantity of the reward.So we're hard wired to create these heuristics. This has many implications, from Cargo Cults to Superstition and Security Beliefs.
The second instinct is the human drive to seek novelty, also noted by Den Beste:
The problem is that humans are wired to seek novelty. I think it's a result of our dietary needs. Lions can eat zebra meat exclusively their entire lives without trouble; zebras can eat grass exclusively their entire lives. They don't need novelty, but we do. Primates require a quite varied diet in order to stay healthy, and if we eat the same thing meal after meal we'll get sick. Individuals who became restless and bored with such a diet, and who sought out other things to eat, were more likely to survive. And when you found something new, you were probably deficient in something that it provided nutritionally, so it made sense to like it for a while -- until boredom set in, and you again sought out something new.The drive for diversity affects more than just our diet. Genetic diversity has been shown to impart broader immunity to disease. Children from diverse parentage tend to develop a blend of each parent's defenses (this has other implications, particularly for the tendency for human beings to work together in groups). The biological benefits of diversity are not limited to humans either. Hybrid strains of many crops have been developed over the years because by selectively mixing the best crops to replant the next year, farmers were promoting the best qualities in the species. The simple act of crossing different strains resulted in higher yields and stronger plants.
The problem here is that evolution has made the biological need for diversity and novelty dependent on our inductive reasoning instincts. As such, what we find is that those we rely upon for new entertainment, like Hollywood or the video game industry, are constantly trying to find a simple formula for a big hit.
It's hard to come up with something completely new. It's scary to even make the attempt. If you get it wrong you can flush amazingly large amounts of money down the drain. It's a long-shot gamble. Every once in a while something new comes along, when someone takes that risk, and the audience gets interested...Indeed, the majority of big films made today appear to be remakes, sequels or adaptations. One interesting thing I've noticed is that something new and exciting often fails at the box office. Such films usually gain a following on video or television though. Sometimes this is difficult to believe. For instance, The Shawshank Redemption is a very popular film. In fact, it occupies the #2 spot (just behind The Godfather) on IMDB's top rated films. And yet, the film only made $28 million (ranked 52 in 1994) in theaters. To be sure, that's not a modest chunk of change, but given the universal love for this film, you'd expect that number to be much higher. I think part of the reason this movie failed at the box office was that marketers are just as susceptible to these novelty problems as everyone else. I mean, how do you market a period prison drama that has an awkward title and no big stars? It doesn't sound like a movie that would be popular, even though everyone seems to love it.
Which brings up another point. Not only is it difficult to create novelty, it can also be difficult to find novelty. This is the crux of the problem: we require novelty, but we're programmed to seek out new things via correlation. There is no place to go for perfect recommendations, and novelty for the sake of novelty isn't necessarily enjoyable. I can seek out some bizarre musical style and listen to it, but the simple fact that it is novel does not guarantee that it will be enjoyable. I can't rely upon how a film is marketed because that is often misleading or, at least, not really representative of the movie (or whatever). Once we do find something we like, our instinct is often to exhaust that author or director or artist's catalog. Usually, by the end of that process, the artist's work begins to seem a little stale, for obvious reasons.
Seeking out something that is both novel and enjoyable is more difficult than it sounds. It can even be a little scary. Many times, things we think will be new actually turn out to be retreads. Other times, something may actually be novel, but unenjoyable. This leads to another phenomenon that Den Beste mentions: the "Unwatched pile." Den Beste is talking about Anime, and at this point, he's begun to accumulate a bunch of anime DVDs which he's bought but never watched. I've had similar things happen with books and movies. In fact, I have several books on my shelf, just waiting to be read, but for some of them, I'm not sure I'm willing to put in the time and effort to read them. Why? Because, for whatever reason, I've begun to experience some set of diminishing returns when it comes to certain types of books. These are similar to other books I've read, and thus I probably won't enjoy these as much (even if they are good books).
The problem is that we know something novel is out there, it's just a matter of finding it. At this point, I've gotten sick of most of the mass consumption entertainment, and have moved on to more niche forms of entertainment. This is really a signal versus noise problem, a traversal of the long tail. An analysis problem. What's more, with globalization and the internet, the world is getting smaller... access to new forms of entertainment is popping up (for example, here in the US, anime was around 20 years ago, but it was nowhere near as common as it is today). This is essentially a subset of a larger information aggregation and analysis problem that we're facing. We're adrift in a sea of information, and must find better ways to navigate.
Sunday, June 11, 2006
Time is short this week, so just a few links I found interesting...
Sunday, April 16, 2006
Shamus stumbled upon an interesting meme (at Tim Worstall's blog) relying upon Wikipedia's ridiculously comprehensive date pages:
Go to Wikipedia and look up your birth day (excluding the year). List three neat facts, two births and one death in your blog, including the year.Like Shamus, I won't limit myself to the numbers above and will instead just list some things I think are interesting about September 13...
Sunday, April 09, 2006
Philadelphia Film Festival: Adult Swim 4 Your Lives
Well. That was interesting. Hosted by Dana Snyder (voice of Master Shake from Aqua Teen Hunger Force) and featuring a veritable plethora of other Adult Swim creators, Adult Swim 4 Your Lives was a show that defies any legitimate explanation. As such, I will simply list out some highlights, as well as some words that I would use to describe the night:
Update 4.15.06: I've created a category for all posts from the Philadelphia Film Festival.
Sunday, March 26, 2006
Introverts and a Curious Guy
Time is short this week, so here's a few interesting links:
Wednesday, January 18, 2006
On Sunday, I wrote about cheating in probabilistic systems, but one thing I left out was that these systems are actually neutral systems. A while ago, John Robb (quoting the Nicholas Carr post I referenced) put it well:
To people, "optimization" is a neutral term. The optimization of a complex mathematical, or economic, system may make things better for us, or it may make things worse. It may improve society, or degrade it. We may not be able to apprehend the ends, but that doesn't mean the ends are going to be good.He's exactly right. Evolution and emergent intelligence doesn't naturally flow towards some eschatological goodness. It moves forward under its own logic. It often solves problems we don't want solved. For example, in global guerrilla open source warfare, this emergent community intelligence is slowly developing forms of attack (such as systems disruption), that make it an extremely effective foe for nation-states.
Like all advances in technology, the progress of self-organizing systems and emergent results can be used for good or for ill. In the infamous words of Buckethead:
Like the atom, the flyswatter can be a force for great good or great evil.Indeed.
Tuesday, January 17, 2006
Happy Birthday, Ben
Today is Ben Franklin's 300th birthday. In keeping with the theme of tradeoffs and compromise that often adorns this blog, and since Franklin himself has also been a common subject, here is a quote from Franklin's closing address to the Constitutional Convention in Philadelphia:
I confess that I do not entirely approve this Constitution at present; but sir, I am not sure I shall ever approve it: For, having lived long, I have experienced many instances of being obliged, by better information or fuller consideration, to change opinions even on important subjects, which I once thought right, but found to be otherwise. It is therefore that, the older I grow, the more apt I am to doubt my own judgment, and to pay more respect to the judgment of others.There are some people today (and even in Franklin's time) who seem to think of compromise as some sort of fundamental evil, but it appears to me to be an essential part of democracy.
Update 1.18.06: Mister Snitch points to The Benjamin Franklin Tercentenary, an excellently designed site dedicated to Franklin's 300th birthday...
Thursday, January 05, 2006
On the lighter side
You may be familiar with my long-winded, more serious style, but I thought this blond joke would be a welcome change of pace. Best. Joke. Evar. [via Chizumatic, whose lack of permalinks add extra irony]
Saturday, December 24, 2005
Fry: "There's supposed to be some kind of, you know, pine tree."In anticipation of the eventual extinction of Pine Trees, here's the traditional Kaedrin Christmas Cactus:
"Happy Christmas to all, and to all a good-night." (sound clip via Can't Get Enough Futurama)
Also regarding Christmas Trees, check out a post from a few years ago: Is the Christmas Tree Christian?
Sunday, November 27, 2005
Hurricane Names, Restaurant Critics, and more...
Time is short this week, so here's a few links:
Sunday, October 09, 2005
Not much time this week, so here are some interesting links:
Sunday, October 02, 2005
Recent events have placed me in a position where I will be interviewing people for open positions on my team. Not having experience with such a thing, my first reaction was to set the monkey research squad loose on the subject. As usual, they didn't disappoint.
Sunday, September 04, 2005
The Pendulum Swings
I've often commented that human beings don't so much solve problems as they trade one set of problems for another (in the hope that the new set of problems are more favorable than the old). Yet that process doesn't always follow a linear trajectory. Initial reactions to a problem often cause problems of their own. Reactions to those problems often take the form of an over-correction. And so it continues, like the swinging of a pendulum, back and forth, until it reaches its final equilibrium.
This is, of course, nothing new. Hegel's philosophy of argument works in exactly that way. You start with a thesis, some sort of claim that becomes generally accepted. Then comes the antithesis, as people begin to find holes in the original thesis and develop an alternative. For a time, the thesis and antithesis vie to establish dominance, but neither really wins. In the end, a synthesis comprised of the best characteristics of the thesis and antithesis emerges.
Naturally, it's rarely so cut and dry, and the process continues as the synthesis eventually takes on the role of the thesis, with new antitheses arising to challenge it. It works like a pendulum, oscillating back and forth until it reaches a stable position (a new synthesis). There are some interesting characteristics of pendulums that are also worth noting in this context. Steven Den Beste once described the two stable states of the pendulum: one in which the weight hangs directly below the hinge, and one in which the weight is balanced directly above the hinge.
On the left, the weight hangs directly below the hinge. On the right, it's balanced directly above it. Both states are stable. But if you slightly perturb the weight, they don't react the same way. When the left weight is moved off to the side, the force of gravity tries to center it again. In practice, if the hinge has a good bearing, the system then will oscillate around the base state and eventually stop back where it started. But if the right weight is perturbed, then gravity pulls the weight away and the right system will fail and convert to the left one.Not all systems are robust, but it's worth noting that even robust systems are not immune to perturbation. The point isn't that they can't fail, it's that when they do fail, they fail gracefully. Den Beste applies the concept to all sorts of things, including governments and economic systems, and I think the analogy is apt. In the coming months and years, we're going to see a lot of responses to the tragedy of hurricane Katrina. Katrina represents a massive perturbation; it's set the pendulum swinging, and it'll be a while before it reaches its resting place. There will be many new policies that will result. Some of them will be good, some will be bad, and some will set new cycles into action. Disaster preparedness will become more prevalent as time goes on, and the plans will get better too. But not all at once, because we don't so much solve problems as trade one set of disadvantages for another, in the hopes that we can get that pendulum to rest in its stable state.
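Den Beste's two pendulum states can even be demonstrated with a crude bit of numerical integration. This is just a toy sketch (the parameters and function are my own invention, chosen for illustration), but it shows the asymmetry he's describing: perturb the hanging weight and it settles back; perturb the balanced one and it topples:

```python
import math

# Toy simulation of the two pendulum states: a weight hanging below the
# hinge vs. one balanced above it. Both start with the same small
# perturbation; only the sign of the gravity term differs.

def simulate(inverted, theta0=0.05, g_over_l=9.8, damping=0.5,
             dt=0.001, steps=20000):
    """Integrate a damped pendulum with simple Euler steps; return the
    final angle (radians) after the perturbation theta0."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        sign = 1.0 if inverted else -1.0  # gravity pushes away vs. pulls back
        alpha = sign * g_over_l * math.sin(theta) - damping * omega
        omega += alpha * dt
        theta += omega * dt
        if abs(theta) > math.pi / 2:      # the inverted pendulum has toppled
            break
    return theta

print(abs(simulate(inverted=False)))  # tiny: oscillates and settles near zero
print(abs(simulate(inverted=True)))   # large: the perturbation grows until it falls
```

The hanging pendulum fails gracefully (it wobbles and recenters); the inverted one fails catastrophically, which is exactly the distinction being applied to institutions above.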
Glenn Reynolds has collected a ton of worthy places to donate for hurricane relief here. It's also worth noting that many employers are matching donations to the Red Cross (mine is), so you might want to go that route if it's available...
Sunday, August 21, 2005
I'm currently reading Vernor Vinge's A Deepness in the Sky. It's an interesting novel, and there are elements of the story that resemble Vinge's singularity. (Potential spoilers ahead) The story concerns two competing civilizations that travel to an alien planet. Naturally, there are confrontations and betrayals, and we learn that one of the civilizations utilizes a process to "Focus" an individual on a single area of study, essentially turning them into a brilliant machine. Naturally, there is a lot of debate about the Focused, and in doing so, one of the characters describes it like this:
... you know about really creative people, the artists who end up in your history books? As often as not, they're some poor dweeb who doesn't have a life. He or she is just totally fixated on learning everything about some single topic. A sane person couldn't justify losing friends and family to concentrate so hard. Of course, the payoff is that the dweeb may find things or make things that are totally unexpected. See, in that way, a little of Focus has always been part of the human race. We Emergents have simply institutionalized this sacrifice so the whole community can benefit in a concentrated, organized way.Debate revolves around this concept because people living in this Focused state could essentially be seen as slaves. However, the quote above reminded me of a post I wrote a while ago called Mastery:
There is an old saying "Jack of all trades, Master of none." This is indeed true, though with the demands of modern life, we are all expected to live in a constant state of partial attention and must resort to drastic measures like Self-Censorship or information filtering to deal with it all. This leads to an interesting corollary for the Master of a trade: They don't know how to do anything else!In that post, I quoted Isaac Asimov, who laments that he's clueless when it comes to cars, and relates a funny story about what happened when he once got a flat tire. I wondered if that sort of mastery was really a worthwhile goal, but the artificially induced Focus in Vinge's novel opens the floor up to several questions. Would you volunteer to be focused in a specific area of study, knowing that you would basically do that and only that? No family, no friends, but only because you are so focused on your studies (as portrayed in the novel, doing work in your field is what makes you happy). What if you could opt to be focused for a limited period of time?
There are a ton of moral and ethical questions about the practice, and as portrayed in the book, it's not a perfect process and may not be reversible (at least, not without damage). The rewards would be great - Focusing sounds like a truly astounding feat. But would it really be worth it? As portrayed in the book, it definitely would not, as those wielding the power aren't very pleasant. Because the Focused are so busy concentrating on their area of study, they become completely dependent on the non-Focused to guide them (it's possible for a Focused person to become too obsessed with a problem, to the point where physical harm or even death can occur) and do everything else for them (i.e. feed them, clean them, etc...) Again, in the book, those who are guiding the Focused are ruthless exploiters. However, if you had a non-Focused guide who you trusted, would you consider it?
I still don't know that I would. While the results would surely be high quality, the potential for abuse is astounding, even when it's someone you trust that is pulling the strings. Nothing says they'll stay trustworthy, and it's quite possible that they could be replaced in some way by someone less trustworthy. If the process was softened to the point where the Focused retains at least some control over their focus (including the ability to go in and out), then this would probably be a more viable option. Fortunately, I don't see this sort of thing happening in the way proposed by the book, but other scenarios present interesting dilemmas as well...
Sunday, June 19, 2005
Neal Stephenson's take on Star Wars: Episode III - Revenge of the Sith in the New York times is interesting on a few levels. He makes some common observations, such as the prevalence of geeky details in supplementary material of the Star Wars universe (such as the Clone Wars cartoons or books), but the real gem is his explanation for why the geeky stuff is mostly absent from the film:
Modern English has given us two terms we need to explain this phenomenon: "geeking out" and "vegging out." To geek out on something means to immerse yourself in its details to an extent that is distinctly abnormal - and to have a good time doing it. To veg out, by contrast, means to enter a passive state and allow sounds and images to wash over you without troubling yourself too much about what it all means.Stephenson says the original Star Wars is a mixture of veg and geek scenes, while the new movies are almost all veg out material. The passive vegging out he describes is exactly how I think of the prequels (except that Episode III seems to have a couple of non-veg out scenes, which is one of the reasons I think it fares better than the other prequels). He also makes a nice comparison to the business world, but then takes a sudden sort of indirect dive towards outsourcing and pessimism at the end of the article, making a vague reference to going "the way of the old Republic."
I'm not sure I agree with those last few paragraphs. I see the point, but it's presented as a given. Many have noted Stephenson could use a good editor for his recent novels, and it looks to me like Stephenson was either intentionally trying to keep it short (it's only two pages - not what you'd expect from someone who routinely writes 900 page books, including three that are essentially a single 2700 page novel) or his article was edited down to fit somewhere. In either case, I'm sure he could have expounded upon those last paragraphs to the tune of a few thousand words, but that's what I like about the guy. Not that the article is bad, but I prefer Stephenson's longwinded style. Ironically, Stephenson has left the details out of his article; it reads more like a PowerPoint presentation that summarizes the bullet points of his argument than the sort of in-depth analysis I'm used to from Stephenson. As such, I'm sure there are a lot of people who would take issue with some of his premises. Perhaps it's an intentional irony, or (more likely) I'm reading too much into it.
Posted by Mark on June 19, 2005 at 10:19 AM.
Sunday, June 05, 2005
Time is short this week, so I'll just have to rely on my army of chain smoking monkey researchers for a few links:
Update: Added another link and some text...
Posted by Mark on June 05, 2005 at 09:57 PM.
Sunday, May 29, 2005
Sharks, Deer, and Risk
Here's a question: Which animal poses the greater risk to the average person, a deer or a shark?
Most people's initial reaction (mine included) to that question is to answer that the shark is the more dangerous animal. However, statistically speaking, the average American is much more likely to be killed by deer (due to collisions with vehicles) than by a shark attack. Truly accurate statistics for deer collisions don't exist, but estimates place the number of accidents in the hundreds of thousands. Millions of dollars worth of damage are caused by deer accidents, as are thousands of injuries and hundreds of deaths, every year.
Shark attacks, on the other hand, are much less frequent. Each year, approximately 50 to 100 shark attacks are reported. "World-wide, over the past decade, there have been an average of 8 shark attack fatalities per year."
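To put rough numbers on it (and these are ballpark assumptions drawn from the estimates above, not precise statistics), the comparison isn't even close:

```python
# Rough relative-risk comparison using the ballpark figures cited above.
# Both numbers are illustrative estimates, not precise statistics.

deer_deaths_per_year_us = 150    # "hundreds of deaths" from deer collisions (US)
shark_deaths_per_year_world = 8  # average shark attack fatalities (world-wide)

ratio = deer_deaths_per_year_us / shark_deaths_per_year_world
print(f"Deer cause roughly {ratio:.0f}x more deaths per year than sharks, "
      f"and the shark figure is world-wide while the deer figure is US-only.")
```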
It seems clear that deer actually pose a greater risk to the average person than sharks. So why do people think the reverse is true? There are a number of reasons, among them the fact that deer don't intentionally cause death and destruction (not that we know of anyway) and they are also usually harmed or killed in the process, while sharks directly attack their victims in a seemingly malicious manner (though I don't believe sharks to be malicious either).
I've been reading Bruce Schneier's book, Beyond Fear, recently. It's excellent, and at one point he draws a distinction between what security professionals refer to as "threats" and "risks."
A threat is a potential way an attacker can attack a system. Car burglary, car theft, and carjacking are all threats ... When security professionals talk about risk, they take into consideration both the likelihood of the threat and the seriousness of a successful attack. In the U.S., car theft is a more serious risk than carjacking because it is much more likely to occur.Everyone makes risk assessments every day, but most everyone also has different tolerances for risk. It's essentially a subjective decision, and it turns out that most of us rely on imperfect heuristics and inductive reasoning when it comes to these sorts of decisions (because it's not like we have the statistics handy). Most of the time, these heuristics serve us well (and it's a good thing too), but what this really ends up meaning is that when people make a risk assessment, they're basing their decision on a perceived risk, not the actual risk.
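Schneier's distinction is easy to express concretely: risk weighs both how likely a threat is and how bad a successful attack would be. Here's a minimal sketch; the likelihood and severity numbers are invented for illustration, not Schneier's figures:

```python
# Threats vs. risks: a risk score weighs likelihood against severity.
# The annual likelihoods and 1-10 severity scores below are made up.

threats = {
    # name: (annual likelihood, severity on an arbitrary 1-10 scale)
    "car burglary": (0.05, 2),
    "car theft":    (0.01, 6),
    "carjacking":   (0.0001, 9),
}

def risk(likelihood, severity):
    """A simple expected-harm score."""
    return likelihood * severity

# Rank threats by risk: car theft outranks carjacking even though
# carjacking is the more severe event, because it's far more likely.
ranked = sorted(threats, key=lambda t: risk(*threats[t]), reverse=True)
print(ranked)
```

Note that the ranking by risk differs from a ranking by severity alone, which is exactly the perceived-versus-actual-risk gap described above.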
Schneier includes a few interesting theories about why people's perceptions get skewed, including this:
Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in -- only a very few small and special parts of it.

When I first considered the Deer/Shark dilemma, my immediate thoughts turned to film. This may be a reflection of how large a part movies play in my life, but I suspect others would also immediately think of Bambi, with its cuddly, innocent deer, and Jaws, with its maniacal great white shark. Indeed, Fritz Schranck once wrote about these "rats with antlers" (as some folks refer to deer) and how "Disney's ability to make certain animals look just too cute to kill" has deterred many people from hunting and eating deer. When you look at the deer collision statistics, what you see is that what Disney has really done is endanger us all!
Given the above, one might be tempted to pursue some form of censorship to keep the media from degrading our ability to determine risk. However, I would argue that this is wrong. Freedom of speech is ultimately a security measure, and if we're to consider abridging that freedom, we must also seriously consider the risks of that action. We might be able to slightly improve our risk decision-making with censorship, but at what cost?
Schneier himself recently wrote about this subject on his blog, in response to an article arguing that suicide bombings in Iraq shouldn't be reported (because it scares people and serves the terrorists' ends). It turns out there are a lot of reasons why the media's focus on horrific events in Iraq causes problems, but almost any way you slice it, it's still wrong to censor the news:
It's wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public -- either through legal censorship or self-imposed "restraint" -- we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.

Like all of security, this comes down to a basic tradeoff. As I'm fond of saying, human beings don't so much solve problems as trade one set of problems for another (in the hopes that the new problems are preferable to the old). Risk can be difficult to determine, and the media's sensationalism doesn't help, but censorship isn't a realistic solution because it introduces problems of its own (and those new problems are worse than the one we're trying to solve in the first place). Plus, both Jaws and Bambi really are great movies!
Posted by Mark on May 29, 2005 at 08:50 PM .: link :.
Sunday, May 22, 2005
Voters and Lurkers
Debating online, whether through message boards or blogs or any other medium, can be rewarding, but it can also be quite frustrating. When most people think of a debate, they think of two sides arguing, with one of the two factions "winning" the argument. It's a process of expression in which people with different points of view express their opinions and are criticised by one another.
I've often found that specific threads tend to boil down to a point where the argument is going back and forth between two sole debaters (with very few interruptions from others). Inevitably, the debate gets to the point where both sides' assumptions (or axioms) have been exposed, and neither side is willing to agree with the other. To the debaters, this can be intensely frustrating. As such, anyone who has spent a significant amount of time debating others online can usually see that they're probably never going to convince their opponents. So who wins the argument?
The debaters can't decide who wins - they obviously think their argument is better than their opponents' (or, at the very least, are unwilling to admit otherwise), and so everyone thinks that they "won." But the debaters themselves don't "win" an argument; it's the people witnessing the debate who are the real winners. They decide which arguments are persuasive and which are not.
This is what the First Amendment of the US Constitution is based on, and it is a fundamental part of our democracy. In a vigorous marketplace of ideas, the majority of voters will discern the truth and vote accordingly.
Unfortunately, there never seems to be any sort of closure when debating online, because the audience is primarily comprised of lurkers, most of whom don't say anything (plus, there are no votes), and so it seems like nothing is accomplished. However, I assure you that is not the case. Perhaps not all lurkers, but a lot of them are reading the posts with a critical eye and coming out of the debate convinced one way or the other. They are the "voters" in an online debate. They are the ones who determine who won the debate. In a scenario where only 10-15 people are reading a given thread, this might not seem like much (and it's not), but if enough of these threads occur, then you really can see results...
I'm reminded of Benjamin Franklin's essay "An apology for printers," in which Franklin defended those who printed allegedly offensive opinion pieces. His thought was that very little would be printed if publishers only produced things that were not offensive to anybody.
Printers are educated in the Belief, that when Men differ in Opinion, both sides ought equally to have the Advantage of being heard by the Public; and that when Truth and Error have fair Play, the former is always an overmatch for the latter.
Posted by Mark on May 22, 2005 at 06:58 PM .: link :.
Sunday, May 08, 2005
It's back! Last week was the first new episode, and things appear to be going well. I remember watching the reruns on the Cartoon Network and cursing FOX for cancelling it. How could they do such a thing?
I have this theory about Family Guy. You see, it's almost too funny. It makes you laugh so much that you forget what was so funny in the first place. And because many of the funny bits are almost completely unrelated to the story (inasmuch as there is a story), it's not like you can remember much by figuring it out from the plot. So all anyone remembers about Family Guy is that it's funny. This apparent amnesia includes the airing date, which during the initial run of Family Guy was all over the place (Sunday, Thursday, Tuesday?). Upon repeated viewings, it becomes easier. Or I'm just a moron who can't remember stuff when he laughs.
American Dad has been less impressive, I think perhaps because it mostly eschews the cutscene/flashback formula of Family Guy. However, I'm an optimist, so I'm willing to give them a chance to flesh it out a bit. I don't think it's as bad as Jeremy Bowers does, but I share his apprehension about Seth McFarlane spreading himself too thin:
I remember when Scott Adams, the author of Dilbert, spread himself too thin with the cartoon and the TV show. I don't have a reference for the quality of the cartoon show without the cartoon, but during the run of the TV show, the quality of the cartoon really took a nose-dive. Most Dilbert daily cartoons before the TV show had effectively two punchlines in the final panel, something that once I noticed really made me respect him, given the constraints of the medium. Other cartoons certainly do it when they can, but Scott Adams pulled it off routinely after his first few years. As he worked on the TV show, the punchline count dropped to an average of one, and it was usually of a lower quality to boot. Now that he's back to just working on the strip, its quality has increased again ...

My thought is that McFarlane does indeed drive the whole show (though I'm not sure about American Dad), but I am again optimistic, for some unspecified reason.
Posted by Mark on May 08, 2005 at 09:59 PM .: link :.
Sunday, March 27, 2005
Slashdot links to a fascinating and thought provoking one hour (!) audio stream of a speech "by futurist and developmental systems theorist, John Smart." The talk is essentially about the future of technology, more specifically information and communication technology. Obviously, there is a lot of speculation here, but it is interesting so long as you keep it in the "speculation" realm. Much of this is simply a high-level summary of the talk with a little commentary sprinkled in.
He starts by laying out some key motivations and guidelines for thinking about this sort of thing, and he paraphrases David Brin (and this, in turn, is me paraphrasing Smart):
We need a pragmatic optimism, a can-do attitude, a balance between innovation and preservation, honest dialogue on persistent problems, ... tolerance of the imperfect solutions we have today, and the ability to avoid both doomsaying and a paralyzing adherence to the status quo. ... Great input leads to great output.

So how do new systems supplant the old? They do useful things with less matter, less energy, and less space. They do this until they reach some sort of limit along those axes (a limitation of matter, energy, or space). It turns out that evolutionary processes are great at this sort of thing.
Smart goes on to list three laws of information and communication technology:
This is about halfway through the speech; he goes on to list many examples and explore some more interesting concepts. Here are some bits I found interesting.
Posted by Mark on March 27, 2005 at 08:40 PM .: link :.
Sunday, February 20, 2005
The Stability of Three
One of the things I've always respected about Neal Stephenson is his attitude (or rather, the lack thereof) regarding politics:
Politics - These I avoid for the simple reason that artists often make fools of themselves, and begin to produce bad art, when they decide to get political. A novelist needs to be able to see the world through the eyes of just about anyone, including people who have this or that set of views on religion, politics, etc. By espousing one strong political view a novelist loses the power to do this. Anyone who has convinced himself, based on reading my work, that I hold this or that political view, is probably wrong. What is much more likely is that, for a while, I managed to get inside the head of a fictional character who held that view.

Having read and enjoyed several of his books, I think this attitude has served him well. In a recent interview in Reason magazine, Stephenson makes several interesting observations. The whole thing is great, and many people are interested in his comments regarding American technology and science, but I found one other tidbit very interesting. Strictly speaking, it doesn't break with his attitude about politics, but it is somewhat political:
Speaking as an observer who has many friends with libertarian instincts, I would point out that terrorism is a much more formidable opponent of political liberty than government. Government acts almost as a recruiting station for libertarians. Anyone who pays taxes or has to fill out government paperwork develops libertarian impulses almost as a knee-jerk reaction. But terrorism acts as a recruiting station for statists. So it looks to me as though we are headed for a triangular system in which libertarians and statists and terrorists interact with each other in a way that I’m afraid might turn out to be quite stable.

I took particular note of what he describes as a "triangular system" because it's something I've seen before...
One of the primary goals of the American Constitutional Convention was to devise a system that would be resistant to tyranny. The founders were clearly aware of the damage that an unrestrained government could do, so they tried to design the new system in such a way that it wouldn't become tyrannical. Democratic institutions like mandatory periodic voting and direct accountability to the people played a large part in this, but the founders also did some interesting structural work as well.
Taking their cue from the English Parliament's relationship with the King of England, the founders decided to create a legislative branch separate from the executive. This, in turn, placed the two governing bodies in competition. However, this isn't a very robust system. If one of the governing bodies becomes more powerful than the other, they can leverage their advantage to accrue more power, thus increasing the imbalance.
A two-way balance of power is unstable, but a three-way balance turns out to be very stable. If any one body becomes more powerful than the other two, the two usually can and will temporarily unite, and their combined power will still exceed the third. So the founders added a third governing body, an independent judiciary.
The result was a bizarre sort of stable oscillation of power between the three major branches of the federal government. Major shifts in power (such as wars) disturbed the system, but it always fell back to a preferred state of flux. This stable oscillation turns out to be one of the key elements of Chaos theory, and is referred to as a strange attractor. These "triangular systems" are particularly good at this, and there are many other examples...
Some argue that the Cold War stabilized considerably when China split from the Soviet Union. Once it became a three-way conflict, there was much less of a chance of unbalance (and as unbalance would have lead to nuclear war, this was obviously a good thing).
Steven Den Beste once noted this stabilizing power of three in the interim Iraqi constitution, where the Iraqis instituted a Presidency Council of 3 Presidents representing each of the 3 major factions in Iraq:
...those writing the Iraqi constitution also had to create a system acceptable to the three primary factions inside of Iraq. If they did not, the system would shake itself to pieces and there was a risk of Iraqi civil war.

It should be interesting to see if that structure will be maintained in the new Iraqi constitution.
As for Stephenson's speculation that a triangular system consisting of libertarians, statists, and terrorists may develop, I'm not sure. They certainly seem to feed off one another in a way that would facilitate such a system, but I'm not convinced it would work out that way, nor do I think it would be a particularly desirable state to be in, especially since its triangular structure could make it a very stable system. In any case, I thought it was an interesting observation and well worth considering...
Posted by Mark on February 20, 2005 at 08:06 PM .: link :.
Sunday, February 06, 2005
Time is tight this week, so just a few quick quotes from Neal Stephenson's Cryptonomicon which struck me during a recent re-reading. The first is essentially a summary of evolution:
Let's set the existence-of-God issue aside for a later volume, and just stipulate that in some way, self-replicating organisms came into existence on this planet and immediately began trying to get rid of each other, either by spamming their environments with rough copies of themselves, or by more direct means which hardly need to be belabored. Most of them failed, and their genetic legacy was erased from the universe forever, but a few found some way to survive and to propagate. After about three billion years of this sometimes zany, frequently tedious fugue of carnality and carnage, Godfrey Waterhouse IV was born, in Murdo, South Dakota, to Blanche, the wife of a Congregational preacher named Bunyan Waterhouse. Like every other creature on the face of the earth, Godfrey was, by birthright, a stupendous badass, albeit in the somewhat narrow technical sense that he could trace his ancestry back up a long line of slightly less highly evolved stupendous badasses to that first self-replicating gizmo - which, given the number and variety of its descendants, might justifiably be described as the most stupendous badass of all time. Everyone and everything that wasn't a stupendous badass was dead. As nightmarishly lethal, memetically programmed death-machines went, these were the nicest you could ever hope to meet.

And the next quote comes from the perspective of Goto Dengo, a Japanese soldier during World War II:
The Americans have invented a totally new bombing tactic in the middle of a war and implemented it flawlessly. His mind staggers like a drunk in the aisle of a careening train. They saw that they were wrong, they admitted their mistake, they came up with a new idea. The new idea was accepted and embraced all the way up the chain of command. Now they are using it to kill their enemies.

Most of you reading this know that the officers who displayed some adaptability (to borrow another phrase from Stephenson) didn't kill themselves, nor were they thrown into prison. They were most likely applauded for their efforts. But Goto Dengo, and the Japanese at the time, embraced a warrior culture where such actions were deeply dishonorable.
It's interesting to consider the second quote in light of the first. In a sense, a war is an implementation of what Stephenson describes as self-replicating organisms "trying to get rid of each other." So the question is what part do honor and flexibility play in the grand evolutionary scheme of things?
Posted by Mark on February 06, 2005 at 11:45 PM .: link :.
Sunday, January 16, 2005
Chasing the Tail
The Long Tail by Chris Anderson : An excellent article from Wired that demonstrates a few of the concepts and ideas I've been writing about recently. One such concept is well described by Clay Shirky's excellent article Power Laws, Weblogs, and Inequality. A system governed by a power law distribution is essentially one where the power (whether it be measured in wealth, links, etc) is concentrated in a small population (when graphed, the rest of the population's power values resemble a long tail). This concentration occurs spontaneously, and it is often strengthened because members of the system have an incentive to leverage their power to accrue more power.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.

As such, this distribution manifests in all sorts of human endeavors, including economics (for the accumulation of wealth), language (for word frequency), weblogs (for traffic or number of inbound links), genetics (for gene expression), and, as discussed in the Wired article, entertainment media sales. Typically, the sales of music, movies, and books follow a power law distribution, with a small number of hit artists who garner the vast majority of the sales. The typical rule of thumb is that 20% of available artists get 80% of the sales.
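Shirky's point, that free choice alone is enough to generate a power law, can be illustrated with a toy simulation. This is a hedged sketch of "preferential attachment" (choosers mostly follow existing popularity, occasionally discovering something at random); the option counts, mixing rate, and seed are all invented parameters, not anything from Shirky's article:

```python
import random

def simulate_choices(options=1000, choices=100_000, seed=0):
    """Each chooser mostly picks in proportion to existing popularity,
    occasionally picking at random -- a standard preferential-attachment
    sketch that yields a power-law-like spread of attention."""
    rng = random.Random(seed)
    counts = [1] * options            # every option starts with one "vote"
    picks = list(range(options))      # flat list, weighted by popularity
    for _ in range(choices):
        if rng.random() < 0.1:
            choice = rng.randrange(options)   # occasional fresh discovery
        else:
            choice = rng.choice(picks)        # follow the crowd
        counts[choice] += 1
        picks.append(choice)
    return sorted(counts, reverse=True)

counts = simulate_choices()
top_20pct = sum(counts[:200]) / sum(counts)
print(f"top 20% of options capture {top_20pct:.0%} of all choices")
```

No chooser here is acting strategically or "selling out," yet a small subset of options ends up with a disproportionate share, which is exactly the spontaneous concentration described above.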
Because of the expense of producing the physical product, and giving it a physical point of sale (shelf-space, movie theaters, etc...), this is bad news for the 80% of artists who get 20% of the sales. Their books, movies, and music eventually go out of print and are generally forgotten, while the successful artists' works are continually reprinted and sold, building on their own success.
However, with the advent of the internet, this is beginning to change. Sales are still governed by the power law distribution, but the internet is removing the physical limitations of entertainment media.
An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that's essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that's the rent for a half inch of shelf space. And so on for DVD rental shops, videogame stores, booksellers, and newsstands.

The decentralized nature of the internet makes it a much better way to distribute entertainment media, as that documentary that has a potential national (heck, worldwide) audience of half a million people could likely succeed if distributed online. The infrastructure for films isn't there yet, but it has been happening more in the digital music world, and even in a hybrid space like Amazon.com, which sells physical products, but in a non-local manner. With digital media, the cost of producing and distributing entertainment media goes way down, and thus even average artists can be considered successful, even if their sales don't approach that of the biggest sellers.
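One way to see the long-tail arithmetic: if sales fall off with rank according to a Zipf-like power law, the tail that a physical store can't afford to stock still sums to a substantial market. A rough sketch, where the catalog size, top-seller sales, and exponent are all assumptions for illustration, not figures from the article:

```python
# Sketch of long-tail arithmetic under an assumed Zipf-like power law:
# the rank-r title sells top_sales / r**exponent copies.
def zipf_sales(n_titles, top_sales=1_000_000, exponent=1.0):
    """Hypothetical sales-by-rank following Zipf's law."""
    return [top_sales / r**exponent for r in range(1, n_titles + 1)]

sales = zipf_sales(100_000)
hits = sum(sales[:20_000])     # the top 20% a physical store might stock
misses = sum(sales[20_000:])   # the tail only online retail can carry

print(f"tail share of total sales: {misses / (hits + misses):.0%}")
```

With these assumed parameters the tail is a minority of total sales, consistent with the 80/20 rule of thumb, but it is still an enormous number of copies, and online "shelf space" costs almost nothing, so that tail becomes profitable rather than unsellable.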
The internet isn't a broadcast medium; it is on-demand, driven by each individual's personal needs. Diversity is the key, and as Shirky's article says: "Diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality." With respect to weblogs (or more generally, websites), big sites are, well, bigger, but links and traffic aren't the only metrics for success. Smaller websites are smaller in those terms, but are often more specialized, and thus they do better both in terms of connecting with their visitors (or customers) and in providing a more compelling value to their visitors. Larger sites, by virtue of their popularity, simply aren't able to interact with visitors as effectively. This is assuming, of course, that the smaller sites do a good job. My site is very small (in terms of traffic and links), but not very specialized, so it has somewhat limited appeal. However, the parts of my site that get the most traffic are the ones that are specialized (such as the Christmas Movies page, or the Asimov Guide). I think part of the reason the blog has never really caught on is that I cover a very wide range of topics, thus diluting the potential specialized value of any single topic.
The same can be said for online music sales. They still conform to a power law distribution, but what we're going to see is increasing sales of more diverse genres and bands. We're in the process of switching from a system in which only the top 20% are considered profitable, to one where 99% are valuable. This seems somewhat counterintuitive for a few reasons:
The first is we forget that the 20 percent rule in the entertainment industry is about hits, not sales of any sort. We're stuck in a hit-driven mindset - we think that if something isn't a hit, it won't make money and so won't return the cost of its production. We assume, in other words, that only hits deserve to exist. But Vann-Adibé, like executives at iTunes, Amazon, and Netflix, has discovered that the "misses" usually make money, too. And because there are so many more of them, that money can add up quickly to a huge new market.

The need to figure out what people want out of a diverse pool of options is where self-organizing systems come into the picture. A good example is Amazon's recommendations engine, and their ability to aggregate various customer inputs into useful correlations. Their "customers who bought this item also bought" lists (and the litany of variations on that theme), more often than not, provide a way to traverse the long tail. They encourage customer participation, allowing customers to write reviews, select lists, and so on, providing feedback loops that improve the quality of recommendations. Note that none of these features was designed to directly sell more items. The focus was on allowing an efficient system of collaborative feedback. Good recommendations are an emergent result of that system. Similar features are available in the online music services, and the Wired article notes:
For instance, the front screen of Rhapsody features Britney Spears, unsurprisingly. Next to the listings of her work is a box of "similar artists." Among them is Pink. If you click on that and are pleased with what you hear, you may do the same for Pink's similar artists, which include No Doubt. And on No Doubt's page, the list includes a few "followers" and "influencers," the last of which includes the Selecter, a 1980s ska band from Coventry, England. In three clicks, Rhapsody may have enticed a Britney Spears fan to try an album that can hardly be found in a record store.

Obviously, these systems aren't perfect. As I've mentioned before, a considerable amount of work needs to be done with respect to the aggregation and correlation aspects of these systems. Amazon and the online music services have a good start, and weblogs are trailing along behind them a bit, but the nature of self-organizing systems dictates that you don't get a perfect solution to start, but rather a steadily improving system. What's becoming clear, though, is that the little guys are (collectively speaking) just as important as the juggernauts, and that's why I'm not particularly upset that my blog won't be wildly popular anytime soon.
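The aggregation underlying a "customers who bought this also bought" feature can be sketched as simple co-occurrence counting. This is a deliberately minimal toy, not Amazon's or Rhapsody's actual algorithm, and the purchase histories below are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Minimal "also bought" aggregator: count how often each pair of items
# appears together in an order. Real engines weigh far more signals
# than raw co-occurrence; this only sketches the feedback-loop idea.
def also_bought(orders):
    pairs = defaultdict(lambda: defaultdict(int))
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            pairs[a][b] += 1
            pairs[b][a] += 1
    return pairs

# Hypothetical purchase histories echoing the Rhapsody example:
orders = [
    ["britney", "pink"],
    ["britney", "pink", "no_doubt"],
    ["pink", "no_doubt", "the_selecter"],
]
recs = also_bought(orders)
# Each artist's top co-purchase leads one hop further down the tail.
print(max(recs["britney"], key=recs["britney"].get))  # pink
```

No one designed the britney-to-the_selecter path; it emerges from aggregating individual choices, which is exactly the collaborative feedback described above.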
Posted by Mark on January 16, 2005 at 08:07 PM .: link :.
Sunday, December 12, 2004
I've been doing a lot of reading and thinking about the concepts discussed in my last post. It's a fascinating, if a little bewildering, topic. I'm not sure I have a great handle on it, but I figured I'd share a few thoughts.
There are many systems that are incredibly flexible, yet they came into existence, grew, and self-organized without any actual planning. Such systems are often referred to as Stigmergic Systems. To a certain extent, free markets have self-organized, guided by such emergent effects as Adam Smith's "invisible hand". Many organisms are able to quickly adapt to changing conditions using a technique of continuous reproduction and selection. To an extent, there are forces on the internet that are beginning to self-organize and produce useful emergent properties, blogs among them.
Such systems are difficult to observe, and it's hard to really get a grasp on what a given system is actually indicating (or what properties are emerging). This is, in part, the way such systems are supposed to work. When many people talk about blogs, they find it hard to believe that a system composed mostly of small, irregularly updated, and downright mediocre (if not worse) blogs can have truly impressive emergent properties (I tend to model the ideal output of the blogosphere as an information resource). Believe it or not, blogging wouldn't work without all the crap. There are a few reasons for this:
The System Design: The idea isn't to design a perfect system. The point is that these systems aren't planned, they're self-organizing. What we design are systems which allow this self-organization to occur. In nature, this is accomplished through constant reproduction and selection (for example, some biological systems can be represented as a function of genes. There are hundreds of thousands of genes, with a huge and diverse number of combinations. Each combination can be judged based on some criteria, such as survival and reproduction. Nature introduces random mutations so that gene combinations vary. Efficient combinations are "selected" and passed on to the next generation through reproduction, and so on).
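The reproduce/mutate/select loop described above can be sketched in a few lines. This is a toy genetic algorithm, with an arbitrary bit-string target and made-up parameters, meant only to show how a population improves with no central planner, just local variation and selection:

```python
import random

# Toy reproduce/mutate/select loop: a population of bit-strings evolves
# toward a target with no central plan. Target and parameters are
# arbitrary choices for illustration.
def evolve(target="1111111111", pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    length = len(target)

    def fitness(genome):
        return sum(a == b for a, b in zip(genome, target))

    population = ["".join(rng.choice("01") for _ in range(length))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # selection
        children = []
        for parent in survivors:                         # reproduction
            child = list(parent)
            child[rng.randrange(length)] = rng.choice("01")  # mutation
            children.append("".join(child))
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges to (or very near) the all-ones target
```

Note that nothing here "designs" a good genome; the system is only set up so that self-organization can occur, which is the point being made about blogging tools as well.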
The important thing with respect to blogs is the tools we use. To a large extent, blogging is simply an extension of many mechanisms already available on the internet, most especially the link. Other weblog-specific mechanisms like blogrolls, permanent links, comments (with links of course) and trackbacks have added functionality to the link and made it more powerful. For a number of reasons, weblogs tend to be affected by power-law distribution, which spontaneously produces a sort of hierarchical organization. Many believe that such a distribution is inherently unfair, as many excellent blogs don't get the attention they deserve, but while many of the larger bloggers seek to promote smaller blogs (some even providing mechanisms for promotion), I'm not sure there is any reliable way to systemically "fix" the problem without harming the system's self-organizational abilities.
In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution.

This self-organization is one of the important things about weblogs; any attempt to get around it will end up harming you in the long run, as the important thing is to find a state in which weblogs are working most efficiently. How can the weblog community be arranged to self-organize and find its best configuration? That is the real question, and that is what we should be trying to answer (emphasis mine):
...although the purpose of this example is to build an information resource, the main strategy is concerned with creating an efficient system of collaboration. The information resource emerges as an outcome if this is successful.

Failure is Important: Self-organizing systems tend to have attractors (a preferred state of the system), such that these systems will always gravitate towards certain positions (or series of positions), no matter where they start. Surprising as it may seem, self-organization only really happens when you expose a system in a steady state to an environment that can destabilize it. By disturbing a steady state, you might cause the system to take up a more efficient position.
It's tempting to dismiss weblogs as a fad because so many of them are crap. But that crap is actually necessary because it destabilizes the system. Bloggers often add their perspective to the weblog community in the hopes that this new information will change the way others think (i.e. they are hoping to induce change - this is roughly referred to as Stigmergy). That new information will often prompt other individuals to respond in some way or another (even if not directly responding). Essentially, change is introduced into the system, and this can cause unpredictable and destabilizing effects. Sometimes this destabilization actually helps the system, sometimes (and probably more often than not) it doesn't. Regardless of its direct effects, the process is essential because it is helping the system become increasingly comprehensive. I touched on this in my last post, among several others, in which I claimed that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. It's not that any one blog or post is particularly reliable in itself; it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. An individual blog may fail to solve a problem, but that failure is important too when you look at the systemic level. Of course, all of this also muddies the waters and causes the system to deteriorate to a state where it is less efficient to use. For every success story like Rathergate, there are probably 10 bizarre and absurd conspiracy theories to contend with.
This is the dilemma faced by all biological systems. The effects that cause them to become less efficient are also the effects that enable them to evolve into more efficient forms. Nature solves this problem with its evolutionary strategy of selecting for the fittest. This strategy makes sure that progress is always in a positive direction only.

So what weblogs need is a selection process that separates the good blogs from the bad. This ties in with the aforementioned power-law distribution of weblogs. Links, be they blogroll links or links to an individual post, essentially represent a sort of currency of the blogosphere and provide an essential internal feedback loop. There is a rudimentary form of this sort of thing going on, and it has proven to be very successful (as Jeremy Bowers notes, it certainly seems to do so much better than the media, whose selection process appears to be simple heuristics). However, the weblog system is still young and I think there is considerable room for improvement in its selection processes. We've only hit the tip of the iceberg here. Syndication, aggregation, and filtering need to improve considerably. Note that all of those things are systemic improvements. None of them directly act upon the weblog community or the desired informational output of the community. They are improvements to the strategy of creating an efficient system of collaboration. A better informational output emerges as an outcome if the systemic improvements are successful.
This is truly a massive subject, and I'm only beginning to understand some of the deeper concepts, so I might end up repeating myself a bit in future posts on this subject, as I delve deeper into the underlying concepts and gain a better understanding. The funny thing is that it doesn't seem like the subject itself is very well defined, so I'm sure lots will be changing in the future. Below are a few links to information that I found helpful in writing this post.
Posted by Mark on December 12, 2004 at 11:15 PM .: link :.
Sunday, December 05, 2004
An Epic in Parallel Form
Tyler Cowen has an interesting post on the scholarly content of blogging in which he speculates as to how blogging and academic scholarship fit together. In so doing he makes some general observations about blogging:
Blogging is a fundamentally new medium, akin to an epic in serial form, but combining the functions of editor and author. Who doesn't dream of writing an epic?
It's an interesting perspective. Many blogs are general in subject, but some of the ones that really stand out have some sort of narrative (for lack of a better term) that you can follow from post to post. As Cowen puts it, an "epic in serial form." The suggestion that reading a single blog many times is more rewarding than reading the best posts from many different blogs is interesting. But while a single blog may give you a broad view of what a field is about, it can also be rewarding to aggregate the specific views of a wide variety of individuals, even biased and partisan individuals. As Cowen mentions, the blogosphere as a whole is the relevant unit of analysis. Even if each individual view is unimpressive on its own, that may not be the case when taken collectively. In a sense, while each individual is writing a flawed epic in serial form, they are all contributing to an epic in parallel form.
Which brings up another interesting aspect of blogs. When the blogosphere tackles a subject, it produces a diverse set of opinions and perspectives, all published independently by a network of analysts who are all working in parallel. The problem here is that the decentralized nature of the blogosphere makes aggregation difficult. Distilling an "answer" from a group as large and diverse as the blogosphere, based on all of the disparate information it has produced, is incredibly difficult, especially when the majority of that data represents the opinions of various analysts. A deficiency in aggregation is part of where groupthink comes from, but some groups are able to harness their disparity into something productive. The many are smarter than the few, but only if the many are able to aggregate their data properly.
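That last claim, that the many beat the few only when aggregation works, is the standard wisdom-of-crowds result, and a toy simulation can illustrate it. This is a sketch with hypothetical numbers; it assumes each analyst's error is independent and unbiased, which is exactly the assumption that groupthink violates.

```python
import random
import statistics

def estimate_error(true_value=100.0, num_analysts=500, noise=30.0, seed=1):
    """Each analyst reports the true value plus independent noise.
    Compare a typical individual's error with the error of the
    aggregated (mean) estimate."""
    rng = random.Random(seed)
    estimates = [true_value + rng.gauss(0, noise) for _ in range(num_analysts)]
    individual_error = statistics.mean(abs(e - true_value) for e in estimates)
    aggregate_error = abs(statistics.mean(estimates) - true_value)
    return individual_error, aggregate_error

ind, agg = estimate_error()
print(f"typical individual error: {ind:.1f}, aggregated error: {agg:.1f}")
```

Averaging hundreds of noisy estimates drives the collective error far below any typical individual's, but only because the errors cancel; if the analysts copy one another (correlated errors), the advantage evaporates, which is one way to read the groupthink point above.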
In theory, blogs represent a self-organizing system that has the potential to evolve and display emergent properties (a sort of human hive mind). In practice, it's a little more difficult to say. I think it's clear that the spontaneous appearance of collective thought, as implemented through blogs or other communication systems, is happening frequently on the internet. However, each occurrence is isolated and only represents an incremental gain in productivity. In other words, a system will sometimes self-organize in order to analyze a problem and produce an enormous amount of data which is then aggregated into a shared vision (a vision which is much more sophisticated than anything that one individual could come up with), but the structure that appears in that case will disappear as the issue dies down. The incredible increase in analytic power is not a permanent stair step, nor is it ubiquitous. Indeed, it can also be hard to recognize the signal in a great sea of noise.
Of course, such systems are constantly and spontaneously self-organizing; themselves tackling problems in parallel. Some systems will compete with others, some systems will organize around trivial issues, some systems won't be nearly as effective as others. Because of this, it might be that we don't even recognize when a system really transcends its perceived limitations. Of course, such systems are not limited to blogs. In fact they are quite common, and they appear in lots of different types of systems. Business markets are, in part, self-organizing, with emergent properties like Adam Smith's "invisible hand". Open Source software is another example of a self-organizing system.
Interestingly enough, this subject ties in nicely with a series of posts I've been working on regarding the properties of Reflexive documentaries, polarized debates, computer security, and national security. One of the general ideas discussed in those posts is that an argument achieves a higher degree of objectivity by embracing and acknowledging its own biases and agenda. Ironically, in acknowledging one's own subjectivity, one becomes more objective and reliable. This applies on an individual basis, but becomes much more powerful when it is part of an emergent system of analysis as discussed above. Blogs are excellent at this sort of thing precisely because they are made up of independent parts that make no pretense at objectivity. It's not that any one blog or post is particularly reliable in itself, it's that blogs collectively are more objective and reliable than any one analyst (a journalist, for instance), despite the fact that many blogs are mediocre at best. The news media represents a competing system (the journalist being the media's equivalent of the blogger), one that is much more rigid and unyielding. The interplay between blogs and the media is fascinating, and you can see each medium evolving in response to the other (the degree to which this is occurring is naturally up for debate). You might even be able to make the argument that blogs are, themselves, emergent properties of the mainstream media.
Personally, I don't think I have that exact sort of narrative going here, though I do believe I've developed certain thematic consistencies in terms of the subjects I cover here. I'm certainly no expert and I don't post nearly often enough to establish the sort of narrative that Cowen is talking about, but I do think a reader would benefit from reading multiple posts. I try to make up for my low posting frequency by writing longer, more detailed posts, often referencing older posts on similar subjects. However, I get the feeling that if I were to break up my posts into smaller, more digestible pieces, the overall time it would take to read and produce the same material would be significantly longer. Of course, my content is rarely scholarly in nature, and my subject matter varies from week to week as well, but I found this interesting to think about nonetheless.
I think I tend to be more of an aggregator than anything else, which is interesting because I've never thought about what I do in those terms. It's also somewhat challenging, as one of my weaknesses is being timely with information. Plus aggregation appears to be one of the more tricky aspects of a system such as the ones discussed above, and with respect to blogs, it is something which definitely needs some work...
Update 12.13.04: I wrote some more on the subject. I also made a minor edit to this entry, moving one paragraph lower down. No content has actually changed, but the new order flows better.
Posted by Mark on December 05, 2004 at 09:23 PM .: link :.
Sunday, November 21, 2004
This is yet another in a series of posts fleshing out ideas initially presented in a post regarding Reflexive Documentary filmmaking and the media. In short, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I expanded the scope of the concepts originally presented in that post to include a broader range of information dissemination processes, which led to a post on computer security and a post on national security.
I had originally planned to apply the same concepts to debating in a relatively straightforward manner. I'll still do that, but recent events have led me to reconsider my position, thus there will most likely be some unresolved questions at the end of this post.
So the obvious implication with respect to debating is that a debate can be more productive when each side exposes their own biases and agenda in making their argument. Of course, this is pretty much required by definition, but what I'm getting at here is more a matter of tactics. Debating tactics often take poor forms, with participants scoring cheap points by using intuitive but fallacious arguments.
I've done a lot of debating in various online forums, often taking a less than popular point of view (I tend to be a contrarian, and am comfortable on the defense). One thing that I've found is that as a debate heats up, the arguments become polarized. I sometimes find myself defending someone or something that I normally wouldn't. This is, in part, because a polarizing debate forces you to dispute everything your opponent argues. To concede one point irrevocably weakens your position, or so it seems. Of course, the fact that I'm a contrarian, somewhat competitive, and stubborn also plays a part in this. Emotions sometimes flare, attitudes clash, and you're often left feeling dirty after such a debate.
None of which is to say that polarized debate is bad. My whole reason for participating in such debates is to get others to consider more than one point of view. If a few lurkers read a debate and come away from it confused or at least challenged by some of the ideas presented, I consider that a win. There isn't anything inherently wrong with partisanship, and as frustrating as some debates are, I find myself looking back on them as good learning experiences. In fact, taking an extreme position and thinking from that biased standpoint helps you understand not only that viewpoint, but the extreme opposite as well.
The problem with such debates, however, is that they really are divisive. A debate which becomes polarized might end up providing you with a more balanced view of an issue, but such debates sometimes also present an unrealistic view of the issue. An example of this is abortion. Debates on that topic are usually heated and emotional, but the issue polarizes, and people who would come down somewhere around the middle end up arguing an extreme position for or against.
Again, I normally chalk this polarization up as a good thing, but after the election, I'm beginning to see the wisdom in perhaps pursuing a more moderated approach. With all the red/blue dichotomies being thrown around with reckless abandon, talk of moving to Canada, and even talk of secession(!), it's pretty obvious that the country has become overly polarized.
I've been writing about Benjamin Franklin recently on this here blog, and I think his debating style is particularly apt to this discussion:
Franklin was worried that his fondness for conversation and eagerness to impress made him prone to "prattling, punning and joking, which only made me acceptable to trifling company." Knowledge, he realized, "was obtained rather by the use of the ear than of the tongue." So in the Junto, he began to work on his use of silence and gentle dialogue.
This contrasts rather sharply with what passes for civilized debate these days. Franklin actually considered it rude to directly contradict or dispute someone, something I had always found confusing. I typically favor a frank exchange of ideas (i.e. saying what you mean), but I'm beginning to come around. In the wake of the election, a lot of advice has been offered up for liberals and the left, and many suggestions center around the idea that they need to "reach out" to more voters. This has been received with indignation by liberals and leftists, and one could hardly blame them. From their perspective, conservatives and the right are just as bad if not worse, and they read such advice as if they're being asked to give up their values. Irrespective of which side is right, I think the general thrust of the advice is that liberal arguments must become more persuasive. No matter how much we might want to paint the country into red and blue partitions, if you really want to be accurate, you'd see only a few small areas of red and blue drowning in a sea of purple. The Democrats don't need to convince that many people to get a more favorable outcome in the next election.
And so perhaps we should be fighting the natural polarization of a debate and take a cue from Franklin, who stressed the importance of deferring, or at least pretending to defer, to others:
"Would you win the hearts of others, you must not seem to vie with them, but to admire them. Give them every opportunity of displaying their own qualifications, and when you have indulged their vanity, they will praise you in turn and prefer you above others... Such is the vanity of mankind that minding what others say is a much surer way of pleasing them than talking well ourselves."
There are weaknesses to such an approach, especially if your opponent does not return the favor, but I think it is well worth considering. That the country has so many opposing views is not necessarily bad, and indeed, is a necessity in a democracy for ideas to compete. But perhaps we need less spin and more moderation... In his essay "Apology for Printers," Franklin opines:
"Printers are educated in the belief that when men differ in opinion, both sides ought equally to have the advantage of being heard by the public; and that when Truth and Error have fair play, the former is always an overmatch for the latter."
Indeed.
Update: Andrew Olmsted posted something along these lines, and he has a good explanation as to why debates often go south:
I exaggerate for effect, but anyone spending much time on sites devoted to either party quickly runs up against the assumption that the other side isn't just wrong, but evil. And once you've made that assumption, it would be wrong to even negotiate with the other side, because any compromise you make is taking the country one step closer to that evil. The enemy must be fought tooth and nail, because his goals are so heinous.
I don't know that we're a majority, as Olmsted hopes, but there's more than just a few of us, at least...
Posted by Mark on November 21, 2004 at 03:29 PM .: link :.
Thursday, November 11, 2004
Arranging Interests in Parallel
I have noticed a tendency on my part to, on occasion, quote a piece of fiction, and then comment on some wisdom or truth contained therein. This sort of thing is typically frowned upon in rigorous debate as fiction is, by definition, contrived and thus referencing it in a serious argument is rightly seen as undesirable. Fortunately for me, this blog, though often taking a serious tone, is ultimately an exercise in thinking for myself. The point is to have fun. This is why I will sometimes quote fiction to make a point, and it's also why I enjoy questionable exercises like speculating about historical figures. As I mentioned in a post on Benjamin Franklin, such exercises usually end up saying more about me and my assumptions than anything else. But it's my blog, so that is more or less appropriate.
Astute readers must at this point be expecting to receive a citation from a piece of fiction, followed by an application of the relevant concepts to some end. And they would be correct.
Early on in Neal Stephenson's novel The System of the World, Daniel Waterhouse reflects on what is required of someone in his position:
He was at an age where it was never possible to pursue one errand at a time. He must do many at once. He guessed that people who had lived right and arranged things properly must have it all rigged so that all of their quests ran in parallel, and reinforced and supported one another just so. They gained reputations as conjurors. Others found their errands running at cross purposes and were never able to do anything; they ended up seeming mad, or else perceived the futility of what they were doing and gave up, or turned to drink.
Naturally, I believe there is some truth to this. In fact, the life of Benjamin Franklin, a historical figure from approximately the same time period as Dr. Waterhouse, provides us with a more tangible reference point.
Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. The consummate example of Franklin's proclivities was the Junto, a club of young workingmen formed by Franklin in the fall of 1727. The Junto was a small club composed of enterprising tradesman and artisans who discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers. The enterprise was typical of Franklin, who was always eager to form associations for mutual benefit, and who aligned his interests so they ran in parallel, reinforcing and supporting one another.
A more specific example of Franklin's knack for aligning interests came when he produced the first recorded abortion debate in America. At the time, Franklin was running a print shop in Philadelphia. His main competitor, Andrew Bradford, published the town's only newspaper. The paper was meager, but very profitable in both money and prestige (which made him more respected by merchants and politicians, and thus more likely to get printing jobs), and Franklin decided to launch a competing newspaper. Unfortunately, another rival printer, Samuel Keimer, caught wind of Franklin's plan and immediately launched a hastily assembled newspaper of his own. Franklin, realizing that it would be difficult to launch a third paper right away, vowed to crush Keimer:
In a competitive bank shot, Franklin decided to write a series of anonymous letters and essays, along the lines of the Silence Dogood pieces of his youth, for Bradford's [American Weekly Mercury] to draw attention away from Keimer's new paper. The goal was to enliven, at least until Keimer was beaten, Bradford's dull paper, which in its ten years had never published any such features.
Franklin's many actions of the time certainly weren't running at cross purposes, and he did manage to align his interests in parallel. He truly was a master, and we'll be hearing more about him on this blog soon.
This isn't the first time I've written about this subject before either. In a previous post, On the Overloading of Information, I noted one of the main reasons why blogging continues to be an enjoyable activity for me, despite changing interests and desires:
I am often overwhelmed by a desire to consume various things - books, movies, music, etc... The subjects of such things are also varied and, as such, often don't mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time).
One way you can tell that my interests have shifted over the years is that the format and content of my writing here have also changed. I am once again reminded of Neal Stephenson's original minimalist homepage, in which he speaks of his ongoing struggle against what Linda Stone termed "continuous partial attention," as that curious feature of modern life only makes the necessity of aligning interests in parallel that much more important.
Aligning blogging with my other core interests, such as reading fiction, is one of the reasons I frequently quote fiction, even in reference to a serious topic. Yes, such a practice is frowned upon, but blogging is a hobby, the idea of which is to have fun. Indeed, Glenn Reynolds, progenitor of one of the most popular blogging sites around, also claims to blog for fun, and interestingly enough, he has quoted fiction in support of his own serious interests as well (more than once). One other interesting observation is that all references to fiction in this post, including even Reynolds' references, are from Neal Stephenson's novels. I'll leave it as an exercise for the reader to figure out what significance, if any, that holds.
Posted by Mark on November 11, 2004 at 11:45 PM .: link :.
Sunday, November 07, 2004
Open Source Security
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. In a follow up post, I examined how this concept could be applied to a broader range of information dissemination processes. That post focused on computer security and how full disclosure of system vulnerabilities actually improves security in the long run. Ironically, public scrutiny is the only reliable way to improve security.
Full disclosure is certainly not perfect. By definition, it increases risk in the short term, which is why opponents are able to make persuasive arguments against it. Like all security, it is a matter of tradeoffs. Does the long term gain justify the short term risk? As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.
Now I'd like to broaden the subject even further, and apply the concept of open security to national security. With respect to national security, the stakes are higher and thus the argument will be more difficult to sustain. If people are unwilling to deal with a few computer viruses in the short term in order to increase long term security, imagine how unwilling they'll be to risk a terrorist attack, even if that risk ultimately closes a few security holes. This may be prudent, and it is quite possible that a secrecy approach is more necessary at the national security level. Secrecy is certainly a key component of intelligence and other similar aspects of national security, so open security techniques would definitely not be a good idea in those areas.
However, there are certain vulnerabilities in processes and systems we use that could perhaps benefit from open security. John Robb has been doing some excellent work describing how terrorists (or global guerillas, as he calls them) can organize a more effective campaign in Iraq. He postulates a Bazaar of violence, which takes its lessons from the open source programming community (using Eric Raymond's essay The Cathedral and the Bazaar as a starting point):
The decentralized, and seemingly chaotic guerrilla war in Iraq demonstrates a pattern that will likely serve as a model for next generation terrorists. This pattern shows a level of learning, activity, and success similar to what we see in the open source software community. I call this pattern the bazaar. The bazaar solves the problem: how do small, potentially antagonistic networks combine to conduct war?
Not only does the bazaar solve the problem, it appears able to scale to disrupt larger, more stable targets. The bazaar essentially represents the evolution of terrorism as a technique into something more effective: a highly decentralized strategy that is nevertheless able to learn and innovate. Unlike traditional terrorism, it seeks to leverage gains from sabotaging infrastructure and disrupting markets. By focusing on such targets, the bazaar does not experience diminishing returns in the same way that traditional terrorism does. Once established, it creates a dynamic that is very difficult to disrupt.
I'm a little unclear as to what the purpose of the bazaar is - the goal appears to be a state of perpetual violence that is capable of keeping a nation in a position of failure/collapse. That our enemies seek to use this strategy in Iraq is obvious, but success essentially means perpetual failure. What I'm unclear on is how they seek to parlay this result into a successful state (which I assume is their long term goal - perhaps that is not a wise assumption).
In any case, reading about the bazaar can be pretty scary, especially when news from Iraq seems to correlate well with the strategy. Of course, not every attack in Iraq correlates, but this strategy is supposedly new and relatively dynamic. It is constantly improving on itself. They are improvising new tactics and learning from them in an effort to further define this new method of warfare.
As one of the commenters on his site notes, it is tempting to claim that John Robb's analysis is essentially an instruction manual for a guerilla organization, but that misses the point. It's better to know where we are vulnerable before we discover that some weakness is being exploited.
One thing that Robb is a little short on is actual, concrete ways with which to fight the bazaar (there are some, and he has pointed out situations where U.S. forces attempted to thwart bazaar tactics, but such examples are not frequent). However, he still provides a valuable service in exposing security vulnerabilities. It seems appropriate that we adopt open source security techniques in order to fight an enemy that employs an open source platform. Vulnerabilities need to be exposed so that we may devise effective counter-measures.
Posted by Mark on November 07, 2004 at 08:56 PM .: link :.
Sunday, October 10, 2004
Open Security and Full Disclosure
A few weeks ago, I wrote about what the mainstream media could learn from Reflexive documentary filmmaking. Put simply, Reflexive Documentaries achieve a higher degree of objectivity by embracing and acknowledging their own biases and agenda. Ironically, by acknowledging their own subjectivity, these films are more objective and reliable. I felt that the media could learn from such a model. Interestingly enough, such concepts can be applied to wider scenarios concerning information dissemination, particularly security.
Bruce Schneier has often written about such issues, and most of the information that follows is summarized from several of his articles, recent and old. The question with respect to computer security systems is this: Is publishing computer and network or software vulnerability information a good idea, or does it just help attackers?
When such a vulnerability exists, it creates what Schneier calls a Window of Exposure in which the vulnerability can still be exploited. This window exists until the vulnerability is patched and installed. There are five key phases which define the size of the window:
Phase 1 is before the vulnerability is discovered. The vulnerability exists, but no one can exploit it. Phase 2 is after the vulnerability is discovered, but before it is announced. At that point only a few people know about the vulnerability, but no one knows to defend against it. Depending on who knows what, this could either be an enormous risk or no risk at all. During this phase, news about the vulnerability spreads -- either slowly, quickly, or not at all -- depending on who discovered the vulnerability. Of course, multiple people can make the same discovery at different times, so this can get very complicated.
The goal is to minimize the impact of the vulnerability by reducing the window of exposure (graphically, the area under the risk curve). There are two basic approaches: secrecy and full disclosure.
The secrecy approach seeks to reduce the window of exposure by limiting public access to vulnerability information. In a different essay about network outages, Schneier gives a good summary of why secrecy doesn't work well:
The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they're lost they're lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there's no way to recover security. Trying to base security on secrecy is just plain bad design.
Secrecy may work on paper, but in practice, keeping vulnerabilities secret removes motivation to fix the problem (it is possible that a company could utilize secrecy well, but it is unlikely that all companies would do so and it would be foolish to rely on such competency). The other method of reducing the window of exposure is to disclose all information about the vulnerability publicly. Full Disclosure, as this method is called, seems counterintuitive, but Schneier explains:
Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn't bother fixing them, believing in the security of secrecy.
Ironically, publishing details about vulnerabilities leads to a more secure system. Of course, this isn't perfect. Obviously publishing vulnerabilities constitutes a short term danger, and can sometimes do more harm than good. But the alternative, secrecy, is worse. As Schneier is fond of saying, security is about tradeoffs. As I'm fond of saying, human beings don't so much solve problems as they trade one set of disadvantages for another (with the hope that the new set isn't quite as bad as the old). There is no solution here, only a less disadvantaged system.
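Schneier's "area under the curve" framing can be made concrete with a toy calculation. All of the numbers below are hypothetical, invented purely to illustrate the tradeoff: disclosure accepts a short spike of high risk in exchange for a much shorter window.

```python
def window_of_exposure(risk_by_day):
    """Total exposure is the area under the daily risk curve:
    the sum of (risk level x days spent at that level)."""
    return sum(risk * days for risk, days in risk_by_day)

# Hypothetical (risk, duration-in-days) segments after a vulnerability
# is discovered, under the two policies:
secrecy = [
    (2, 30),   # low-grade risk while only a few attackers know
    (8, 60),   # exploit circulates quietly; vendor unmotivated to patch
    (1, 10),   # patch finally ships and is installed
]
full_disclosure = [
    (9, 7),    # short spike: everyone, attackers included, now knows
    (3, 14),   # vendor under public pressure; patch released quickly
    (1, 10),   # patch installed
]

print("secrecy exposure:", window_of_exposure(secrecy))
print("disclosure exposure:", window_of_exposure(full_disclosure))
```

Under these made-up figures the secrecy curve encloses far more total exposure even though the disclosure curve peaks higher; arguments against full disclosure tend to point at the peak, while arguments for it point at the area.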
This is what makes advocating open security systems like full disclosure difficult. Opponents will always be able to point to their flaws, and secrecy advocates are good at exploiting the intuitive (but not necessarily correct) nature of their systems. Open security systems are just counterintuitive, and there is a tendency to avoid increasing risk in the short term (as things like full disclosure do). Unfortunately, that means the long term danger increases, as there is less incentive to fix security problems.
By the way, Schneier has started a blog. It appears to be made up of the same content that he normally releases monthly in the Crypto-Gram newsletter, but spread out over time. I think it will be interesting to see if Schneier starts responding to events in a more timely fashion, as that is one of the keys to the success of blogs (and it's something that I'm bad at, unless news breaks on a Sunday).
Posted by Mark on October 10, 2004 at 11:56 AM .: link :.
Sunday, October 03, 2004
Monkey Research Squad Strikes Again
My crack squad of monkey researchers comes through again with a few interesting links:
Posted by Mark on October 03, 2004 at 02:44 PM .: link :.
Wednesday, September 15, 2004
A Reflexive Media
"To write or to speak is almost inevitably to lie a little. It is an attempt to clothe an intangible in a tangible form; to compress an immeasurable into a mold. And in the act of compression, how the Truth is mangled and torn!" - Anne Morrow Lindbergh
There are many types of documentary films. The most common form of documentary is referred to as Direct Address (aka Voice of God). In such a documentary, the viewer is directly acknowledged, usually through narration and voice-overs. There is very little ambiguity and it is pretty obvious how you're expected to interpret these types of films. Many television and news programs use this style, to varying degrees of success. Ken Burns' famous Civil War and Baseball series use this format eloquently, but most traditional propaganda films also fall into this category (a small caveat: most films are hybrids, rarely falling exclusively into one category). Such films give the illusion of being an invisible witness to certain events and are thus very persuasive and powerful.
The problem with Direct Address documentaries is that they grew out of a belief that Truth is knowable through objective facts. In a recent sermon he posted on the web, Donald Sensing spoke of the difference between facts and the Truth:
Truth and fact are not the same thing. We need only observe the presidential race to discern that. John Kerry and allies say that the results of America's war against Iraq is mostly a failure while George Bush and allies say they are mostly success. Both sides have the same facts, but both arrive at a different "truth."

I'm not sure Sensing chose the best example here, but the concept itself is sound. Any documentary is biased in the Truth that it presents, even if the facts are undisputed. In a sense objectivity is impossible, which is why documentary scholar Bill Nichols admires films which seek to contextualize themselves, exposing their limitations and biases to the audience.
Reflexive Documentaries use many devices to acknowledge the filmmaker's presence, perspective, and selectivity in constructing the film. It is thought that films like this are much more honest about their subjectivity, and thus provide a much greater service to the audience.
An excellent example of a Reflexive documentary is Errol Morris' brilliant film, The Thin Blue Line. The film examines the "truth" around the murder of a Dallas policeman. The use of colored lighting throughout the film eventually correlates with who is innocent or guilty, and Morris is also quite manipulative through his use of editing - deconstructing and reconstructing the case to demonstrate just how problematic finding the truth can be. His use of framing calls attention to itself, daring the audience to question the intents of the filmmakers. The use of interviews in conjunction with editing is carefully structured to demonstrate the subjectivity of the film and its subjects. As you watch the movie, it becomes quite clear that Morris is toying with you, the viewer, and that he wants you to be critical of the "truth" he is presenting.
Ironically, a documentary becomes more objective when it acknowledges its own biases and agenda. In other words, a documentary becomes more objective when it admits its own subjectivity. There are many other forms of documentary not covered here (i.e. direct cinema/cinema verité, interview-based, performative, mock-documentaries, etc... most of which mesh together as they did in Morris' Blue Line to form a hybrid).
In Bill Nichols' seminal essay, Voice of Documentary (Can't seem to find a version online), he says:
"Documentary filmmakers have a responsibility not to be objective. Objectivity is a concept borrowed from the natural sciences and from journalism, with little place in the social sciences or documentary film."

I always found it funny that Nichols equates the natural sciences with journalism, as it seems to me that modern journalism is much more like a documentary than a natural science. As such, I think the lessons of Reflexive documentaries (and its counterparts) should apply to the realm of journalism.
The media emphatically does not acknowledge its biases. By bias, I don't mean anything as short-sighted as liberal or conservative media bias, I mean structural bias, of which political orientation is but a small part (that link contains an excellent essay on the nature of media bias, one that I find presents a more complete picture and is much more useful than the tired old ideological bias we always hear so much about*). Such subjectivity does exist in journalism, yet the media stubbornly persists in its firm belief that it is presenting the objective truth.
The recent CBS scandal, consisting of a story bolstered by what appear to be obviously forged documents, provides us with an immediate example. Terry Teachout makes this observation regarding how few prominent people are willing to admit that they are wrong:
I was thinking today about how so few public figures are willing to admit (for attribution, anyway) that they’ve done something wrong, no matter how minor. But I wasn’t thinking of politicians, or even of Dan Rather. A half-remembered quote had flashed unexpectedly through my mind, and thirty seconds’ worth of Web surfing produced this paragraph from an editorial in a magazine called World War II:

Soon after he had completed his epic 140-mile march with his staff from Wuntho, Burma, to safety in India, an unhappy Lieutenant General Joseph W. Stilwell was asked by a reporter to explain the performance of Allied armies in Burma and give his impressions of the recently concluded campaign. Never one to mince words, the peppery general responded: "I claim we took a hell of a beating. We got run out of Burma and it is as humiliating as hell. I think we ought to find out what caused it, and go back and retake it."

Stilwell spoke those words sixty-two years ago. When was the last time that such candor was heard in like circumstances? What would happen today if similar words were spoken by some equally well-known person who’d stepped in it up to his eyebrows?

As he points out later in his post, I don't think we're going to be seeing such admissions any time soon. Again, CBS provides a good example. Rather than admit the possibility that they may be wrong, their response to the criticisms of their sources has been vague, dismissive, and entirely reliant on their reputation as a trustworthy staple of journalism. They have not yet comprehensively responded to any of the numerous questions about the documents; questions which range from "conflicting military terminology to different word-processing techniques". It appears their strategy is to escape the kill zone by focusing on the "truth" of their story, that Bush's service in the Air National Guard was less than satisfactory. They won't admit that the documents are forgeries, and by focusing on the arguably important story, they seek to distract attention away from any discussion of their own wrongdoing - in effect claiming that the documents aren't important because the story is "true" anyway.
Should they admit they were wrong? Of course they should, but they probably won't. If they don't, it won't be because they think the story is right or because they think the documents are genuine. They won't admit wrongdoing and they won't correct their methodologies or policies because to do so would be to acknowledge to the public that they are something less than an objective purveyor of truth.
Yet I would argue that they should do so, that it is their duty to do so just as it is the documentarian's responsibility to acknowledge their limitations and agenda to their audience.
It is also interesting to note that weblogs contrast the media by doing just that. Glenn Reynolds notes that the internet is a low-trust medium, which paradoxically indicates that it is more trustworthy than the media (because blogs and the like acknowledge their bias and agenda, admit when they're wrong, and correct their mistakes):
The Internet, on the other hand, is a low-trust environment. Ironically, that probably makes it more trustworthy.

The mainstream media as we know it is on the decline. They will no longer be able to get by on their brand or their reputations alone. The collective intelligence of the internet, combined with the natural reflexiveness of its environment, has already provided a challenge to the underpinnings of journalism. On the internet, the dominance of the media is constantly challenged by individuals who question the "truth" presented to them in the media. I do not think that blogs have the power to eclipse the media, but their influence is unmistakable. The only question that remains is if the media will rise to the challenge. If the way CBS has reacted is any indication, then, sadly, we still have a long way to go.
* Yes, I do realize the irony of posting this just after I posted about liberal and conservative tendencies in online debating, and I hinted at that with my "Update" in that post.
Thanks to Jay Manifold for the excellent Structural Bias of Journalism link.
Posted by Mark on September 15, 2004 at 11:07 PM .: link :.
Thursday, September 09, 2004
Benjamin Franklin: American, Blogger & LIAR!
I've been reading a biography of Benjamin Franklin (Benjamin Franklin: An American Life by Walter Isaacson), and several things have struck me about the way in which he conducted himself. As with a lot of historical figures, there is a certain aura that surrounds the man which is seen as impenetrable today, but it's interesting to read about how he was perceived in his time and contrast that with how he would be perceived today. As usual, there is a certain limit to the usefulness of such speculation, as it necessarily must be based on certain assumptions that may or may not be true (as such this post might end up saying more about me and my assumptions than Franklin!). In any case, I find such exercises interesting, so I'd like to make a few observations.
The first is that he would have probably made a spectacular blogger, if he chose to engage in such an activity (Ken thinks he would definitely be a blogger, but I'm not so sure). He not only has all the makings of a wonderful blogger, I think he'd be extremely creative with the format. He was something of a populist, his writing was humorous, self-deprecating, and often quite profound at the same time. His range of knowledge and interest was wide, and his tone was often quite congenial. All qualities valued in any blogger.
He was incredibly prolific (another necessity for a successful blog), and often wrote the letters to his paper himself under assumed names, and structured them in such a way as to gently deride his competitors while making some other interesting point. For instance, Franklin once published two letters, written under two different pseudonyms, in which he manufactured the first recorded abortion debate in America - not because of any strong feelings on the issue, but because he knew it would sell newspapers and because his competitor was serializing entries from an encyclopedia at the time and had started with "Abortion." Thus the two letters were not only interesting in themselves, but also provided ample opportunity to impugn his competitor.
One thing I think we'd see in a Franklin blog is entire comment threads consisting of a full back-and-forth debate, with all entries written by Franklin himself under assumed names. I can imagine him working around other "real" commenters with his own pseudonyms, and otherwise having fun with the format (he'd almost certainly make a spectacular troll as well).
If there was ever a man who could make a living out of blogging, I think Franklin was it. This is, in part, why I'm not sure he'd truly end up as a pure blogger, as even in his day, Franklin was known to mix private interests with public ones, and to leverage both to further his business interests. He could certainly have organized something akin to The Junto on the internet, where a group of likeminded fellows got together (whether it be physically or virtually over the internet) and discussed issues of the day and also endeavored to form a vehicle for the furtherance of their own careers.
Then again, perhaps Franklin would simply have started his own newspaper and had nothing to do with blogging (or perhaps he would attempt to mix the two in some new way). The only problem would be that the types of satire and hoaxes he could get away with in his newspapers in the early 18th century would not really be possible in today's atmosphere (such playfulness has long ago left the medium, but is alive and well in the blogosphere, which is one thing that would tend to favor his participation).
Which brings me to my next point: I have to wonder how Franklin would have done in today's political climate. Would he have been able to achieve political prominence? Would he want to? Would his anonymous letters and hoaxes in his newspapers have gotten him into trouble? I can imagine the self-righteous indignation now: "His newspaper is a farce! He's a LIAR!" And the Junto? I don't even want to think of the conspiracy theories that could be conjured with that sort of thing in mind.
One thing Franklin was exceptionally good at was managing his personal image, but would he be able to do so in today's atmosphere? I suspect he would have done well in our time, but I don't know how politically active he would be (and I suppose there is something to be said about his participation being partly influenced by the fact that he was a part of a revolution, not a true politician of the kind we have today). I know the basic story of his life, but I haven't gotten that far in the book, so perhaps I should revisit this subject later. And thus ends my probably inaccurate, but interesting nonetheless, discussion of Franklin in our times. Expect more references to Franklin in the future, as I have been struck by quite a few things about his life that are worth discussing today.
Posted by Mark on September 09, 2004 at 10:00 PM .: link :.
Sunday, August 22, 2004
Respecting Other Talents: David Foster writes about the dangers of "Functional Chauvinism":
A person typically spends his early career in a particular function, and interacts mainly with others in that function. And there is often an unwholesome kind of functional "patriotism" which goes beyond pride in one's own work and disparages the work done by others ("we could get this software written if those marketing idiots would just stop bothering us.")

An excellent post (and typical of the work over at Photon Courier), Foster focuses on the impacts to the business world, but I remember this sort of thing being prevalent in college. I was an engineer, but I naturally had to take on a number of humanities courses in addition to my technical workload. Functional chauvinism came from both students and professors, but the people who stood out were those who avoided this pitfall and made an effort to understand and respect functional differences.
For instance, many of my fellow engineering students were pissed that they even had to take said humanities courses. After all, they were paying an exorbitant amount of money to be educated in advanced technical issues, not what some Greek guy thought 2 millennia ago (personally, I found those subjects interesting and appreciated the chance for an easy A - ok, there's a bit of chauvinism there too, but I at least respected the general idea that humanities were important).
On the other hand, there were professors who were so absorbed in their area of study that they could not conceive of a student not being spellbound and utterly fascinated by whatever they taught. For someone majoring in philosophy, that's fine, but for an engineer who considers their technical courses to be their priority, it becomes a little different. I got lucky, in that several of my professors actually took into account what major their students were. Often a class would be filled with engineers or hard science majors, and these classes were made more relevant and rewarding because the professors took that into account when teaching. Other professors were not so considerate.
It is certainly understandable to have such feelings, and to a point there's no real harm done, but it can't hurt to take a closer look at what other people do either. As Foster concludes, "Respect for talents other than one's own. A key element of individual and organizational success." Indeed.
Posted by Mark on August 22, 2004 at 03:37 PM .: link :.
Checking in with my chain smoking monkey research staff, here are a few interesting links they've dug up:
Posted by Mark on August 22, 2004 at 02:32 PM .: link :.
Sunday, July 18, 2004
With great freedom, comes great responsibility...
David Foster recently wrote about a letter to the New York Times which echoed sentiments regarding Iraq that appear to be commonplace in certain circles:
While we have removed a murderous dictator, we have left the Iraqi people with a whole new set of problems they never had to face before...

I've often written about the tradeoffs inherent in solving problems, and the invasion of Iraq is no exception. Let us pretend for a moment that everything that happened in Iraq over the last year went exactly as planned. Even in that best case scenario, the Iraqis would be facing "a whole new set of problems they never had to face before." There was no action that could have been taken regarding Iraq (and this includes inaction) that would have resulted in an ideal situation. We weren't really seeking to solve the problems of Iraq, so much as we were exchanging one set of problems for another.
Yes, the Iraqis are facing new problems they have never had to face before, but the point is that the new problems are more favorable than the old problems. The biggest problem they are facing is, in short, freedom. Freedom is an odd thing, and right now, halfway across the world, the Iraqis are finding that out for themselves. Freedom brings great benefits, but also great responsibility. Freedom allows you to express yourself without fear of retribution, but it also allows those you hate to express things that make your blood boil. Freedom means you have to acknowledge their views, no matter how repulsive or disgusting you may find them (there are limits, of course, but that is another subject). That isn't easy.
A little while ago, Steven Den Beste wrote about Jewish immigrants from the Soviet Union:
About 1980 (I don't remember exactly) there was a period in which the USSR permitted huge numbers of Jews to leave and move to Israel. A lot of them got off the jet in Tel Aviv and instantly boarded another one bound for New York, and ended up here.

There are a lot of people who ended up in the U.S. because they were fleeing oppression, and when they got here, they were confronted with "a whole new set of problems they never had to face before." Most of them were able to adapt to the challenges of freedom and prosper, but don't confuse prosperity with utopia. These people did not solve their problems, they traded them for a set of new problems. For most of them, the problems associated with freedom were more favorable than the problems they were trying to escape from. For some, the adjustment just wasn't possible, and they returned to their homes.
Defecting North Koreans face a host of challenges upon their arrival in South Korea (if they can make it that far), including the standard freedom related problems: "In North Korea, the state allocates everything from food to jobs. Here, having to do their own shopping, banking or even eating at a food court can be a trying experience." The differences between North Korea and South Korea are so vast that many defectors cannot adapt, despite generous financial aid, job training and other assistance from civic and religious groups. Only about half of the defectors are able to wrangle jobs, but even then, it's hard to say that they've prospered. But at the same time, are their difficulties now worse than their previous difficulties? Moon Hee, a defector who is having difficulties adjusting, comments: "The present, while difficult, is still better than the past when I did not even know if there would be food for my next meal."
There is something almost paradoxical about freedom. You see, it isn't free. Yes, freedom brings benefits, but you must pay the price. If you want to live in a free country, you have to put up with everyone else being free too, and that's harder than it sounds. In a sense, we aren't really free, because the freedom we live with and aspire to is a limiting force.
On the subject of Heaven, Saint Augustine once wrote:
The souls in bliss will still possess the freedom of will, though sin will have no power to tempt them. They will be more free than ever - so free, in fact, from all delight in sinning as to find, in not sinning, an unfailing source of joy. ...in eternity, freedom is that more potent freedom which makes all sin impossible. - Saint Augustine, City of God (Book XXII, Chapter 30)

Augustine's concept of a totally free will is seemingly contradictory. For him, freedom, True Freedom, is doing the right thing all the time (I'm vastly simplifying here, but you get the point). Outside of Heaven, however, doing the right thing, as we all know, isn't easy. Just ask Spider-Man.
I never really read the comics, but in the movies (which appear to be true to their source material) Spider-Man is all about the conflict between responsibilities and desires. Matthew Yglesias is actually upset with the second film because it has a happy ending:
Being the good guy -- doing the right thing -- really sucks, because doing the right thing doesn't just mean avoiding wrongdoing, it means taking affirmative action to prevent it. There's no time left for Peter's life, and his life is miserable. Virtue is not its own reward, it's virtue, the rewards go to the less conscientious. There's no implication that it's all worthwhile because God will make it right in the End Times, the life of the good guy is a bleak one. It's an interesting (and, I think, a correct) view and it's certainly one that deserves a skilled dramatization, which is what the film gives you right up until the very end. But then -- ta da! -- it turns out that everyone does get to be happy after all. A huge letdown.

Of course, plenty of people have noted that the Spider-Man story doesn't end with the second movie, and that the third is bound to be filled with the complications of superhero dating (which are not limited to Spider-Man).
Spider-Man grapples with who he is. He has gained all sorts of powers, and with those powers, he has also gained a certain freedom. It could be very liberating, but as the saying goes: With great power comes great responsibility. He is not obligated to use his powers for good or at all, but he does. However, for a good portion of the second film he shirks his duties because a life of pure duty has totally ruined his personal life. This is that conflict between responsibilities and desires I mentioned earlier. It turns out that there are limits to Spider-Man's altruism.
For Spider-Man, it is all about tradeoffs, though he may have learned it the hard way. First he took on too much responsibility, and then too little. Will he ever strike a delicate balance? Will we? For we are all, in a manner of speaking, Spider-Man. We all grapple with similar conflicts, though they manifest in our lives with somewhat less drama. Balancing your personal life with your professional life isn't as exciting, but it can be quite challenging for some.
And so the people of Iraq are facing new challenges; problems they have never had to face before. Like Spider-Man, they're going to have to deal with their newfound responsibilities and find a way to balance them with their desires. Freedom isn't easy, and if they really want it, they'll need to do more than just avoid problems, they'll have to actively solve them. Or, rather, trade one set of problems for another. Because with great freedom, comes great responsibility.
Posted by Mark on July 18, 2004 at 09:16 PM .: link :.
Sunday, June 13, 2004
A Specific Culture
In thinking of the issues discussed in my last post, I remembered this Neal Stephenson quote from In the Beginning Was the Command Line:
The only real problem is that anyone who has no culture, other than this global monoculture, is completely screwed. Anyone who grows up watching TV, never sees any religion or philosophy, is raised in an atmosphere of moral relativism, learns about civics from watching bimbo eruptions on network TV news, and attends a university where postmodernists vie to outdo each other in demolishing traditional notions of truth and quality, is going to come out into the world as one pretty feckless human being. And--again--perhaps the goal of all this is to make us feckless so we won't nuke each other. [emphasis added]

It is true that one of the things that religion gives us is a specific way of looking at and understanding the world. Further, it gives people a certain sense of belonging that is so important to us as social beings. Even if someone ends up rejecting the tenets of their faith, they have benefitted from the sense of community and gained a certain way of looking at the world that won't entirely go away.
Posted by Mark on June 13, 2004 at 09:32 PM .: link :.
Friday, June 11, 2004
Religion isn't as comforting as it seems
Steven Den Beste is an atheist, yet he is unlike any atheist I have ever met in that he seems to understand theists (in the general sense of the term) and doesn't hold their beliefs against them. As such, I have gained an immense amount of respect for him and his beliefs. He speaks with conviction about his beliefs, but he is not evangelistic.
In his latest post, he asks one of the great unanswerable questions: What am I? I won't pretend to have any of the answers, but I do object to one thing he said. It is a belief that is common among atheists (though theists are little better):
Is a virus alive? I don't know. Is a hive mind intelligent? I don't know. Is there actually an identifiable self with continuity of existence which is typing these words? I really don't know. How much would that self have to change before we decide that the continuity has been disrupted? I think I don't want to find out. [Emphasis added]

The idea that these sorts of unanswerable questions are not troubling to a believer, or are easy for a believer to answer, is a common one, but I also believe it to be false. Religion is no more comforting than any other system of beliefs, including atheism. Religion does provide a vocabulary for the unanswerable, but all that does is help us grapple with the questions - it doesn't solve anything and I don't think it is any more comforting. I believe in God, but if you asked me what God really is, I wouldn't be able to give you a definitive answer. Actually, I might be able to do that, but "God is a mystery" is hardly comforting or all that useful.
Elsewhere in the essay, he refers to the Christian belief in the soul:
To a Christian, life and self are ultimately embodied in a person's soul. Death is when the soul separates from the body, and that which makes up the essence of a person is embodied in the soul (as it were).

He goes on to list some conundrums that would be troubling to the believer but they all touch on the most troubling thing - what the heck is the soul in the first place? Trying to answer that is no more comforting to a theist than trying to answer the questions he's asking himself. The only real difference is a matter of vocabulary. All religion has done is shifted the focus of the question.
Den Beste goes on to say that there are many ways in which atheism is cold and unreassuring, but fails to recognize the ways in which religion is cold and unreassuring. For instance, there is no satisfactory theodicy that I have ever seen, and I've spent a lot of time studying such things (16 years of Catholic schooling, baby!). A theodicy is essentially an attempt to reconcile God's existence with the existence of evil. Why does God allow evil to exist? Again, there is no satisfactory answer to that question, not the least of which because there is no satisfactory definition of both God and evil!
Now, theists often view atheists in a similar manner. While Den Beste laments the cold and unreassuring aspects of atheism, a believer almost sees the reverse. To some believers, if you remove God from the picture, you also remove all concept of morality and responsibility. Yet, that is not the case, and Den Beste provides an excellent example of a morally responsible atheist. The grass is greener on the other side, as they say.
All of this is generally speaking, of course. Not all religions are the same, and some are more restrictive and closed-minded than others. I suppose it can be a matter of degrees, with one religion or individual being more open minded than the other, but I don't really know of any objective way to measure that sort of thing. I know that there are some believers who aren't troubled by such questions and proclaim their beliefs in blind faith, but I don't count myself among them, nor do I think it is something that is inherent in religion (perhaps it is inherent in some religions, but even then, religion does not exist in a vacuum and must be reconciled with the rest of the world).
Part of my trouble with this may be that I seem to have the ability to switch mental models rather easily, viewing a problem from a number of different perspectives and attempting to figure out the best way to approach a problem. I seem to be able to reconcile my various perspectives with each other as well (for example, I seem to have no problem reconciling science and religion with each other), though the boundaries are blurry and I can sometimes come up with contradictory conclusions. This is in itself somewhat troubling, but at the same time, it is also somewhat of an advantage that I can approach a problem in a number of different ways. The trick is knowing which approach to use for which problem; hardly an easy proposition. Furthermore, I gather that I am somewhat odd in this ability, at least among believers. I used to debate religion a lot on the internet, and after a time, many refused to think of me as a Catholic because I didn't seem to align with others' perception of what Catholics are. I always found that rather amusing, though I guess I can understand the sentiment.
Unlike Den Beste, I do harbor some doubt in my beliefs, mainly because I recognize them as beliefs. They are not facts, and I must concede the possibility that my beliefs are incorrect. Like all sets of beliefs, there is an aspect of my beliefs that is very troubling and uncomforting, and there is a price we all pay for believing what we believe. And yet, believe we must. If we required our beliefs to be facts in order to act, we would do nothing. The value we receive from our beliefs outweighs the price we pay, or so we hope...
I suppose this could be seen by Steven to be missing the forest for the trees, but the reason I posted it is because the issue of beliefs discussed above fits nicely with several recent posts I made under the guise of Superstition and Security Beliefs (and Heuristics). They might provide a little more detail on the way I think regarding these subjects.
Posted by Mark on June 11, 2004 at 12:09 AM .: link :.
Sunday, May 30, 2004
Heuristics of Combat
Otherwise known as Murphy's laws of Combat, most of which are derived from Murphy's more general law: "Anything that can go wrong, will go wrong." Soldiers often add to this what is called O'Neil's Law: "Murphy was an optimist."
War is, of course, a highly unstable and chaotic undertaking. Combat and preparation are beset on all sides by unanticipated problems, especially during the opening stages of combat, when all of the theoretical constructs, plans, and doctrines are put to the test. Infantrymen are common victims of Murphy's Law, and have thus codified their general observations in a list of Murphy's laws of Combat. Naturally, there are many variations of the list, but I'll only be referencing a few rules because I think they're a rather telling example of heuristics in use.
Most of the rules are concise and somewhat humorous (if it weren't for the subject matter) bits of wisdom such as "Incoming fire has the right of way," and though some are indeed factual, most are based on general observations or are meant to imply a heuristic. For instance:
Always keep in mind that your weapon was made by the lowest bidder.

This is, of course, a fact: most of the time, weapons are made by the lowest bidder. And yet, there is an unmistakable conclusion that one is supposed to reach when reading this rule: your weapon won't always work the way it is supposed to. That is also true, but it is worth noting that one must still rely on their weapon. If a soldier refused to fight unless he had a perfect weapon, he would never fight! This is an example of a heuristic which one must be aware of, but which one must use with caution. Weapons must be used, after all.
Perfect plans aren't
These laws refer to the difficulty of planning an action in the chaotic and unpredictable atmosphere of war. To go into battle without a plan is surely foolish, and yet, ironically, the plan rarely survives intact (interestingly, these laws, which indicate the failure of one heuristic, the necessity of planning, have become another: don't blindly follow the plan, especially when events don't conform to it). The ability to adapt and improvise is thus a treasured characteristic in a soldier.
I recently watched a few episodes of the excellent Band of Brothers series, and in one episode, a group of US soldiers assault a German artillery battery. Lieutenant Winters, the man planning the attack, instructs Sergeant Lipton that he'll need TNT the moment his group reaches the first gun (so they can blow it up).
Of course, it doesn't quite go as planned, and Lipton is held up crossing the battlefield. Winters improvises, using what he has available (another soldier had some TNT, but no way to detonate it, so they used a German grenade they found in the nest). Once Lipton finally reaches Winters with the TNT, Winters simply points to the busted gun, illustrating that the plan has not survived.
A couple of times above, I've said that something might be funny, if it wasn't about war, which was a point I sort of made in my earlier post:
When you're me, rooting for a sports team or betting modest amounts of money on a race, failure doesn't mean much. In other situations, however, failure is not so benign. Yet, despite the repercussions, failure is still inevitable and necessary in these situations. In the case of war, for instance, this can indeed be difficult and heartbreaking, but no less necessary.
When planning a war, it is necessary to rely on heuristics because you may not have all the information you need, or the information you have might not be as accurate as you think. Unfortunately, there is no real way around this. Soldiers are forced to make decisions without all the facts, and must rely on imperfect techniques to do so. It is a simple fact of life, and we would do well to consider these sorts of things when viewing battles from afar. For while it may seem like a war that exhibits such chaos and unpredictability is a failure, such is not really the case. In closing, I'll leave you with yet another law of combat, one I find particularly fitting:
If it's stupid but works, it's not stupid.
Posted by Mark on May 30, 2004 at 06:30 PM .: link :.
Last week, I wrote about superstition, inspired by an Isaac Asimov article called "Knock Plastic!" In revisiting that essay, I find that Asimov has collected 6 broad examples of what he calls "Security Beliefs." They are called this because such beliefs are "so comforting and so productive of feelings of security" that all men employ them from time to time. Here they are:
Last week, I also referenced this: "It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant." What process do we use to reject the alternatives and eventually select the winner? I'd like to think it was something logical and rational, but that strikes me as something of a security belief in itself (or perhaps just a demonstration of Asimov's 5th security belief).
When we refer to logic, we are usually referring to a definitive conclusion that can be inferred from the evidence at hand. Furthermore, this deductive process is highly objective and repeatable, meaning that multiple people working under the same rules with the same evidence should all get the same (correct) answer. Obviously, this is a very valuable process; mathematics, for instance, is based on deductive logic.
However, there are limits to this kind of logic, and there are many situations in which it does not apply. For example, we are rarely in possession of all the evidence necessary to come to a logical conclusion. In such cases, decisions are often required, and we must fall back on some other form of reasoning. This is usually referred to as induction, and it is generally based on a set of heuristics, or guidelines, which we have all been maintaining during the course of our lives. We produce this set of guidelines by extrapolating from our experiences, and by sharing our observations. Unlike deductive logic, it appears that this process is something that is innate, or at the very least, something that we are bred to do. It also appears that this process is very useful, as it allows us to operate in situations which we do not understand. We won't exactly know why we're acting the way we are, just that our past experience has shown that acting that way is good. It is almost a non-thinking process, and we all do it constantly.
The problem with this process is that it is inherently subjective and not always accurate. This process is extremely useful, but it doesn't invariably produce the desired results. Superstitions are actually heuristics, albeit generally false ones. But they arise because producing such explanations is a necessary part of our life. We cannot explain everything we see, and since we often need to act on what we see, we must rely on less than perfect heuristics and processes.
Like it or not, most of what we do is guided by these imperfect processes. Strangely, these non-thinking processes work exceedingly well; so much so that we are rarely inclined to think that there is anything "wrong" with our behavior. I recently stumbled upon this, by Dave Rodgers:
Most of the time, people have little real idea why they do the things they do. They just do them. Mostly the reasons why have to do with emotions and feelings, and little to nothing to do with logic or reason. Those emotions and feelings are the products of complex interactions between certain hardwired behaviors and perceptual receivers; a set of beliefs that are cognitively accessible, but most often function below the level of consciousness in conjunction with the more genetically fixed apparatus mentioned before; and certain habits of behavior which are also usually unconscious. ...
Dave seems to think that the processes I'm referring to are "emotional" and "feeling" based, but I am not sure that is so. Extrapolating from a set of heuristics doesn't seem like an emotional process to me, but at this point we reach a rather pedantic discussion of what "emotion" really is.
The point here is that our actions aren't always perfectly reasonable or rational, and that is not necessarily a bad thing. If we could not act unless we could reach a logical conclusion, we would do very little. We do things because they work, not necessarily because we reasoned that they would work before we did them. Afterwards, we justify our actions, and store away any learned heuristics for future use (or modify existing ones to account for the new data). Most of the time, this process works. However, these heuristics will fail from time to time as well. When you're me, rooting for a sports team or betting modest amounts of money on a race, failure doesn't mean much. In other situations, however, failure is not so benign. Yet, despite the repercussions, failure is still inevitable and necessary in these situations. In the case of war, for instance, this can indeed be difficult and heartbreaking, but no less necessary. [thanks to Jonathon Delacour for the Dave Rodgers post]
Posted by Mark on May 30, 2004 at 05:18 PM .: link :.
Sunday, May 23, 2004
One of my favorite anecdotes (probably apocryphal, as these things usually go) tells of a horseshoe that hung on the wall over Niels Bohr's desk. One day, an exasperated visitor could not help asking, "Professor Bohr, you are one of the world's greatest scientists. Surely you cannot believe that object will bring you good luck." "Of course not," Bohr replied, "but I understand it brings you luck whether you believe or not."
I've had two occasions to be obsessively superstitious this weekend. The first was Saturday night's depressing Flyers game. Due to a poorly planned family outing (thanks a lot Mike!), I missed the first period and a half of the game. During that time, the Flyers went down 2-0. As soon as I started watching, they scored a goal, much to my relief. But as the game ground to a less than satisfactory close, I could not help but think, what if I had been watching for that first period?
Even as I thought that, though, I recognized how absurd and arrogant a thought like that is. As a fan, I obviously cannot participate in the game, but all fans like to believe they are a factor in the outcome of the game and will thus go to extreme superstitious lengths to ensure the team wins. That way, there is some sort of personal pride to be gained (or lost, in my case) from the team winning, even though there really isn't.
I spent the day today at the Belmont Racetrack, betting on the ponies. Longtime readers know that I have a soft spot for gambling, but that I don't do it very often nor do I ever really play for high stakes. One of the things I really enjoy is people watching, because some people go to amusing lengths to perform superstitious acts that will bring them that mystical win.
One of my friends informed me of his superstitious strategy today. His entire betting strategy dealt with the name of the horse. If the horse's name began with an "S" (i.e. Secretariat, Seabiscuit, etc...) it was bound to be good. He also made an impromptu decision that names which displayed alliteration (i.e. Seattle Slew, Barton Bank, etc...) were also more likely to win. So today, when he spied "Seaside Salute" in the program, which exhibited both alliteration and the letter "S", he decided it was a shoo-in! Of course, he only bet it to win, and it placed, thus he got screwed out of a modest amount of money.
Like I should talk. My entire betting strategy revolves around John R. Velazquez, the best jockey in the history of horse racing. This superstition did not begin with me, as several friends discovered this guy a few years ago, but it has been passed on and I cannot help but believe in the power of JRV. When I bet on him, I tend to win. When I bet against him, he tends to be riding the horse that screws me over. As a result, I need to seriously consider the consequences of crossing JRV whenever I choose to bet on someone else.
Now, if I were to collect historical data regarding my bets for or against JRV (which is admittedly a very small data set, and thus not terribly conclusive either way, but stay with me here) I wouldn't be surprised to find that my beliefs are unwarranted. But that is the way of the superstition - no amount of logic or evidence is strong enough to be seriously considered (while any supporting evidence is, of course, trumpeted with glee).
Now, I don't believe for a second that watching the Flyers makes them play better, nor do I believe that betting on (or against) John R. Velazquez will increase (or decrease) my chances of winning. But I still think those things... after all, what could I lose?
This could be a manifestation of a few different things. It could be a relatively benign "security belief" (or "pleasing falsehood" as some like to call it - I'm sure there are tons of names for it) which, as long as you realize what you're dealing with can actually be fun (as my obsession with JRV is). It could also be brought on by what Steven Den Beste calls the High cliff syndrome.
It seems that our brains are constantly formulating alternatives, and then rejecting most of them at the last instant. ... All of us have had the experience of thinking something which almost immediately horrified us, "Why would I think such a thing?" I call it "High cliff syndrome".
It seems to be one of the profound truths of human existence that we can conceive of impossible situations that we know will never be possible. None of us are immune, from one of the great scientific minds of our time to the lowliest casino hound. This essay was, in fact, inspired by an Isaac Asimov essay called "Knock Plastic!" (as published in Magic) in which Asimov confesses his habitual knocking of wood (of course, he became a little worried over the fact that natural wood was being used less and less in ordinary construction... until, of course, someone introduced him to the joys of knocking on plastic). The insights driven by such superstitious "security beliefs" must indeed be kept in perspective, but that includes realizing that we all think these things and that sometimes, it really can't hurt to indulge in a superstition.
Update: More on Security Beliefs here.
Posted by Mark on May 23, 2004 at 09:32 PM .: link :.
Thursday, May 20, 2004
Let's Go Flyers!
I don't write about hockey much, but since my Flyers decided to make tonight interesting with their overtime goal in a must-win game, I figured I was due. I've never really played hockey, so I can't say as though I have a true understanding of the game, but I can follow it well and even though NHL 2004 has eaten my soul, those EA Sports games have always helped me understand the real game better. Fortunately for me, Colby Cosh has been writing really solid stuff on his 2004 NHL Playoffs page. He actually hasn't posted there for a while (no round 3 notes, it seems), but what's there is still worth reading. Here he describes the epic overtime victory by the Flyers over the Maple Leafs, ending the second round of the playoffs:
I have to say that the Toronto Maple Leafs--in dying--made up for 13 games' worth of intermittently lackluster play in the seven minutes of overtime against Philadelphia Tuesday night. If I had to show a foreigner a short piece of hockey footage to help him understand the excitement this game can create, I'd show him that OT. It wasn't just the way things ended, although that right there is a story for the grandkids. Even before the all-century finish, the seven minutes were full of odd-man rushes, wildly bouncing pucks, great saves by Robert Esche followed by heart-stopping rebounds, and other terrific hits.
Tonight's game had a similarly exciting feel to it, though perhaps not quite as spectacular, as there wasn't as much freewheeling back-and-forth play (but since most of the play kept the Flyers in the offensive zone, it was damn exciting for me :P). With any luck, the Flyers will be able to harness that momentum for game 7 and then head for the Stanley Cup.
If the Flyers can pull this off, I think we'll be in for a spectacular Stanley Cup finals. Both Keith Primeau and Jarome Iginla have been obscenely dominant clutch players during the playoffs, and they're both really nice guys. It should make for a great series. But first things first. The Flyers need to win game 7 in Tampa on Saturday! Go Flyers!
Posted by Mark on May 20, 2004 at 11:21 PM .: link :.
Sunday, May 02, 2004
The Unglamorous March of Technology
We live in a truly wondrous world. The technological advances over just the past 100 years are astounding, but, in their own way, they're also absurd and even somewhat misleading, especially when you consider how these advances are discovered. More often than not, we stumble onto something profound by dumb luck or by brute force. When you look at how a major technological feat was accomplished, you'd be surprised by how unglamorous it really is. That doesn't make the discovery any less important or impressive, but we often take the results of such discoveries for granted.
For instance, how was Pi originally calculated? Chris Wenham provides a brief history:
So according to the Bible it's an even 3. The Egyptians thought it was 3.16 in 1650 B.C.. Ptolemy figured it was 3.1416 in 150 AD. And on the other side of the world, probably oblivious to Ptolemy's work, Zu Chongzhi calculated it to 355/113. In Baghdad, circa 800 AD, al-Khwarizmi agreed with Ptolemy; 3.1416 it was, until James Gregory begged to differ in the late 1600s.
π is an important number, and being able to figure out its value has been a significant factor in the advance of technology. While all of these numbers are pretty much the same (to varying degrees of precision), isn't it absurd that someone figured out π by dropping 34,000 pins on a grid? We take π for granted today; we don't have to go about finding the value of π, we just use it in our calculations.
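That pin-dropping approach is usually attributed to Buffon's needle experiment, and it's easy to sketch in a few lines of Python. The function name and parameters below are my own illustration, not anything from Wenham's piece: a needle of length L dropped on a floor ruled with parallel lines a distance d apart (L ≤ d) crosses a line with probability 2L/(πd), so counting crossings lets you back out π.

```python
import math
import random

def estimate_pi(num_drops: int, needle_len: float = 1.0, line_gap: float = 1.0) -> float:
    """Estimate pi by simulating Buffon's needle drops.

    A needle of length needle_len falls on a floor ruled with parallel
    lines line_gap apart (needle_len <= line_gap).  The needle crosses
    a line with probability 2*L / (pi*d), so pi is approximately
    2*L*num_drops / (d * crossings).
    """
    crossings = 0
    for _ in range(num_drops):
        # Distance from the needle's center to the nearest line, and the
        # needle's angle relative to the lines (symmetry lets us use a
        # quarter turn and half a gap).
        center = random.uniform(0, line_gap / 2)
        angle = random.uniform(0, math.pi / 2)
        if center <= (needle_len / 2) * math.sin(angle):
            crossings += 1
    if crossings == 0:
        return float("inf")  # guard against a freakishly tiny sample
    return (2 * needle_len * num_drops) / (line_gap * crossings)

print(estimate_pi(34_000))
```

With 34,000 drops you typically get only two or three correct digits, which is exactly the unglamorous brute force the post is talking about: an enormous amount of effort for a modest amount of precision.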
In Quicksilver, Neal Stephenson portrays several experiments performed by some of the greatest minds in history, and many of the things they did struck me as especially unglamorous. Most would point to the dog and bellows scene as a prime example of how unglamorous the unprecedented age of discovery recounted in the book really was (and they'd be right), but I'll choose something more mundane (page 141 in my edition):
"Help me measure out three hundred feet of thread," Hooke said, no longer amused.And, of course, the experiment was a failure. Why? The scale was not precise enough! The book is filled with similar such experiments, some successful, some not.
Another example is telephones. Pick one up, enter a few numbers on the keypad and voila! you're talking to someone halfway across the world. Pretty neat, right? But how does that system work, behind the scenes? Take a look at the photo on the right. This is a typical intersection in a typical American city, and it is absolutely absurd. Look at all those wires! Intersections like that are all over the world, which is part of the reason I can pick up my phone and talk to someone so far away. Another part of the reason I can do that is that almost everyone has a phone. And yet, this system is perceived to be elegant.
Of course, the telephone system has grown over the years, and what we have now is elegant compared to what we used to have:
The engineers who collectively designed the beginnings of the modern phone system in the 1940's and 1950's only had mechanical technologies to work with. Vacuum tubes were too expensive and too unreliable to use in large numbers, so pretty much everything had to be done with physical switches. Their solution to the problem of "direct dial" with the old rotary phones was quite clever, actually, but by modern standards was also terribly crude; it was big, it was loud, it was expensive and used a lot of power and worst of all it didn't really scale well. (A crossbar is an N² solution.) ... The reason the phone system handles the modern load is that the modern telephone switch bears no resemblance whatever to those of the 1950's. Except for things like hard disks, they contain no moving parts, because they're implemented entirely in digital electronics.
So we've managed to get rid of all the moving parts and make things run more smoothly and reliably, but isn't it still an absurd system? It is, but we don't really stop to think about it. Why? Because we've hidden the vast and complex backend of the phone system behind innocuous looking telephone numbers. All we need to know to use a telephone is how to operate it (i.e. how to punch in numbers) and what number we want to call. Wenham explains, in a different essay:
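That parenthetical about the crossbar being an "N² solution" can be made concrete. A single crossbar needs a switch at every input/output intersection, so N lines cost N² crosspoints; the classic telephone-era answer to that growth was the three-stage Clos network (Charles Clos, Bell Labs, 1953), which trades a little internal structure for far fewer crosspoints. The sketch below is my own illustration of the arithmetic, not anything from the quoted post:

```python
def crossbar_crosspoints(n_lines: int) -> int:
    # One switch at every input/output intersection: N x N of them.
    return n_lines * n_lines

def clos_crosspoints(n_lines: int) -> int:
    """Crosspoints in a strictly nonblocking three-stage Clos network.

    Split N lines into r groups of n (N = n*r).  With m = 2n - 1 middle
    switches the network is strictly nonblocking (Clos, 1953), and the
    total crosspoint count is 2*r*(n*m) + m*r*r.  Picking the best split
    gives growth closer to N^1.5 than the crossbar's N^2.
    """
    best = crossbar_crosspoints(n_lines)
    for n in range(1, n_lines + 1):
        if n_lines % n:
            continue  # only consider splits that divide N evenly
        r = n_lines // n
        m = 2 * n - 1
        best = min(best, 2 * r * n * m + m * r * r)
    return best

for lines in (100, 1_000, 10_000):
    print(lines, crossbar_crosspoints(lines), clos_crosspoints(lines))
```

At 10,000 lines the crossbar needs a hundred million crosspoints while the Clos arrangement gets by with a few million, which is why mechanical exchanges were built this way long before digital electronics made the whole question moot.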
The numbers seem pretty simple in design, having an area code, exchange code and four digit number. The area code for Manhattan is 212, Queens is 718, Nassau County is 516, Suffolk County is 631 and so-on. Now let's pretend it's my job to build the phone routing system for Emergency 911 service in the New York City area, and I have to route incoming calls to the correct police department. At first it seems like I could use the area and exchange codes to figure out where someone's coming from, but there's a problem with that: cell phone owners can buy a phone in Manhattan and get a 212 number, and yet use it in Queens. If someone uses their cell phone to report an accident in Queens, then the Manhattan police department will waste precious time transferring the call.
He also mentions cell phones, which are somewhat less absurd than plain old telephones, but when you think about it, all we've done with cell phones is abstract the telephone lines. We're still connecting to cell towers (which need to be placed densely throughout the world) and from there, a call is often routed through the plain old telephone system. If we could see the RF layer in action, we'd be astounded; it would make the telephone wires look organized and downright pleasant by comparison.
The act of hiding the physical nature of a system behind an abstraction is very common, but it turns out that all major abstractions are leaky. But all leaks in an abstraction, to some degree, are useful.
One of the most glamorous technological advances of the past 50 years was the advent of space travel. Thinking of the heavens is indeed an awe-inspiring and humbling experience, to be sure, but when you start breaking things down to the point where we can put a man in space, things get very dicey indeed. When it comes to space travel, there is no more glamorous a person than the astronaut, but again, how does one become an astronaut? They need to pore over and memorize giant telephone-book-sized volumes filled with technical specifications and detailed schematics. Hardly a glamorous proposition.
Steven Den Beste recently wrote a series of articles concerning the critical characteristics of space warships, and it is fascinating reading, but one of the things that struck me about the whole concept was just how unglamorous space battles would be. It sounds like a battle using the weapons and defenses described would be punctuated by long periods of waiting followed by a short burst of activity in which one side was completely disabled. This is, perhaps, the reason so many science fiction movies and books seem to flout the rules of physics. As a side note, I think a spectacular film could be made while still obeying the rules of physics, but that is only because we're so used to the absurd, physics-defying space battles.
None of this is to say that technological advances aren't worthwhile or that those who discover new and exciting concepts are somehow not impressive. If anything, I'm more impressed at what we've achieved over the years. And yet, since we take these advances for granted, we marginalize the effort that went into their discovery. This is due in part to the necessary abstractions we make to implement various systems. But when abstractions hide the crude underpinnings of technology, we come to see that technology and its creation as glamorous, thus bestowing honors upon those who make the discovery (perhaps for the wrong reasons). It's an almost paradoxical cycle. Perhaps because of this, we expect newer discoveries and innovations to somehow be less crude, but we must realize that all of our discoveries are inherently crude.
And while we've discovered a lot, it is still crude and could use improvements. Some technologies have stayed the same for thousands of years. Look at toilet paper. For all of our wondrous technological advances, we're still wiping our ass with a piece of paper. The Japanese have the most advanced toilets in the world, but they've still not figured out a way to bypass the simple toilet paper (or, at least, abstract the process). We've got our work cut out for us. Luckily, we're willing to go to absurd lengths to achieve our goals.
Posted by Mark on May 02, 2004 at 09:47 PM .: link :.
Sunday, April 25, 2004
Iraqi Ghosts, Puritans, and Geeks
Just a few interesting things I've stumbled across recently:
Posted by Mark on April 25, 2004 at 11:14 AM .: link :.
Sunday, April 18, 2004
Sorry for the lack of updates recently. I've been exceedingly busy lately, with no end in sight. And since my chain-smoking monkey research staff, emboldened by the Simpsons voice talent, have gone on strike, I don't have a whole lot of stuff to even point to. However, I'd like to make a few quick updates to some recent posts:
Posted by Mark on April 18, 2004 at 12:46 PM .: link :.
Sunday, March 21, 2004
Inherently Funny Words, Humor, and Howard Stern
Here's a question: Which of the following words is most inherently funny?
Words with a 'k' in it are funny. Alkaseltzer is funny. Chicken is funny. Pickle is funny. All with a 'k'. 'L's are not funny. 'M's are not funny. Cupcake is funny. Tomatoes is not funny. Lettuce is not funny. Cucumber's funny. Cab is funny. Cockroach is funny -- not if you get 'em, only if you say 'em.
Well, that is certainly a start, but it doesn't really tell the whole story. Words with an "oo" sound are also often funny, especially when used in reference to bodily functions (as in poop, doody, booger, boobies, etc...) In fact, bodily functions are just plain funny. Witness fart.
Of course, ultimately it's a subjective thing. To me, boobies are funnier than breasts, even though they mean the same thing. To you, perhaps not. It's the great mystery of humor, and one of the most beautiful things about laughter is that it happens involuntarily. We don't (always) have to think about it, we just do it. Here's a quote from Dennis Miller to illustrate the point:
The truth is the human sense of humor tends to be barbaric and it has been that way all along. I'm sure on the eve of the nativity when the tall Magi smacked his forehead on the crossbeam while entering the stable, Joseph took a second away from pondering who impregnated his wife and laughed his little carpenter ass off. A sense of humor is exactly that: a sense. Not a fact, not etched in stone, not an empirical math equation but just what the word intones: a sense of what you find funny. And obviously, everybody has a different sense of what's funny. If you need confirmation on that I would remind you that Saved by the Bell recently celebrated the taping of their 100th episode. Oh well, one man's Moliere is another man's Screech and you know something, that's the way it should be.
There has been a lot of controversy recently about the FCC's proposed fines against Howard Stern (which may have been temporarily postponed). Stern has been fined many times before, including "$600,000 after Stern discussed masturbating to a picture of Aunt Jemima." Stern, of course, has flown off the handle at the prospect of new fines. Personally, I think he's overreacting a bit by connecting the whole thing with Bush and the religious right, but part of the reason he is so successful is that his overreaction isn't totally uncalled for. At the core of his argument is a serious concern about censorship, and a worry about the FCC abusing its authority.
On the other hand, some people don't see what all the fuss is about. What's wrong with having a standard for the public airwaves that broadcasters must live up to? Well, in theory, nothing. I'm not wild about the idea, but there are things I can understand people not wanting to be broadcast over public airwaves. The problem here is what counts as acceptable.
Just what is the standard? Sure, you've got the 7 dirty words, that's easy enough, but how do you define decency? The fines proposed against Stern are supposedly from a 3 year old broadcast. Does that sound right to you? Recently Stern wanted to do a game in which the loser had to let someone fart in their face. Now, I can understand some people thinking that's not very nice, but does that qualify as "indecent"? Apparently, it might, and Stern was not allowed to proceed with the game (he was given the option to place the loser in a small booth, and then have someone fart in the booth). Would it actually have resulted in a fine? Who knows? And that is the real problem with standards. If you want to propose a standard, it has to be clear, and you need to draw a line between what is hurtful and what is simply disgusting or offensive. You may be upset at Stern's asking a Nigerian woman if she eats monkeys, but does that deserve a fine from the government? And how much? And is it really the job of the government to decide these sorts of things? In the free market, advertisers can choose (and have chosen) not to advertise on Stern's program.
At the bottom of this post, Lawrence Theriot makes a good point about that:
Yes a lot of what Stern does could be considered indecent by a large portion of the population (which is the Supreme Court standard) but in this case it's important to consider WHERE those people might live and to what degree they are likely to be exposed to Stern's brand of humor before you decide that those people need federal protection from hearing his show. Or, in other words, might the market have already acted to protect those people in a very real way that makes Federal action unnecessary?
In the end, I don't know the answer, but there is no easy solution here. I can see why people want standards, but standards can be quite impractical. On the other hand, I can see why Stern is so irate at the prospect of being fined for something he said 3 years ago - and also never knowing if what he's going to say qualifies as "indecent" (and not really being able to take such a thing to court to really decide). Dennis Miller again:
We should question it all; poke fun at it all; piss off on it all; rail against it all; and most importantly, for Christ's sake, laugh at it all. Because the only thing separating holy writ from complete bullshit is your perspective. It's your only weapon. Keep the safety off. Don't take yourself too seriously.
In the end, Stern makes a whole lot of people laugh and he doesn't take himself all that seriously. Personally, I don't want to fine him for that, but if you do, you need to come up with a standard that makes sense and is clear and practical to implement. I get the feeling this wouldn't be an issue if he was clearly right or clearly wrong...
Posted by Mark on March 21, 2004 at 09:04 PM .: link :.
Thursday, March 18, 2004
Elephants and the Media
I've been steadily knocking off films from my 2003 Should Have Seen 'Em list. Among the films recently viewed was Gus Van Sant's striking Elephant. The film portrays the massacre at an ordinary high school much like Columbine (I originally thought it was Columbine, and the similarities are numerous, but apparently not). It simply shows the events as they unfold, from the ordinary morning to the massacre that follows. There is no explanation, no preaching about the ills of modern society, no empty solutions proffered. It is the events of one day, as seen by a number of people, laid bare. Van Sant employs a series of long tracking shots, following this person or that, to lend an air of detached documentary to the film, and it works. This lack of sensationalism was a bold move, but I think the correct one, and it's the only way a movie about such a thing could possibly be relevant. Van Sant has said of this film: "I want the audience to make its own observations and draw its own conclusions," and I think he has succeeded admirably.
Roger Ebert wrote an excellent review of the movie, and in it, he comments:
Let me tell you a story. The day after Columbine, I was interviewed for the Tom Brokaw news program. The reporter had been assigned a theory and was seeking sound bites to support it. "Wouldn't you say," she asked, "that killings like this are influenced by violent movies?" No, I said, I wouldn't say that. "But what about 'Basketball Diaries'?" she asked. "Doesn't that have a scene of a boy walking into a school with a machine gun?" The obscure 1995 Leonardo DiCaprio movie did indeed have a brief fantasy scene of that nature, I said, but the movie failed at the box office (it grossed only $2.5 million), and it's unlikely the Columbine killers saw it.
Ouch. The entire review is good, so check it out.
Posted by Mark on March 18, 2004 at 08:56 PM .: link :.
Sunday, March 07, 2004
Thanks to Chris Wenham's short story Clear as mud, I've been craving a good science fiction novel. So I started reading Ender's Game by Orson Scott Card. It's an excellent book, and though I have not yet finished the book, Card makes a lot of interesting choices. For those interested, there will be spoilers ahead.
The story takes place in the distant future where aliens have attacked earth twice, almost destroying the human race. To prepare for their next encounter with the aliens, humans band together under a world government and go about breeding military geniuses, and training them. The military pits students against each other in a series of warlike "games." Andrew "Ender" Wiggin is one such genius, but his abilities are far and above everyone else. This is in part due to his natural talent, but it is also due to certain personality traits: curiosity, an analytical thought process, and humility (among others).
The following passage takes place just after Ender commands his new army to a spectacular victory in just his first match as commander. It was such a spectacular victory, in fact, that Ender becomes a subject of ire amongst the other commanders.
Carn Carby made a point of coming to greet Ender before the lunch period ended. It was, again, a gracious gesture, and, unlike Dink, Carby did not seem wary. "Right now I'm in disgrace," he said frankly. "They won't believe me when I tell them you did things that nobody's ever seen before. So I hope you beat the snot out of the next army you fight. As a favor to me."

One of the interesting things about Ender is that he's not perfect, and he freely admits it all the time. His humility is essential. Failures don't matter unless you learn from them (the ceramics parable is a recent example of this sort of thing). Ender doesn't fail much, but he's not afraid to confront the reality that someone might think of something he hasn't thought of. He relies on others to help him all the time. The passage above shows how much Ender values humility in his peers as well.
I don't know why Ender's humility surprised me, as Ender is, after all, only human. But it did. It's an interesting perspective, and I'm enjoying the book a lot. As I said, I haven't finished it yet, so for all I know, he becomes an arrogant and ignorant prick towards the end of the novel, but I doubt that. Ender's humility is integral to his success, just as humility plays an important part in any success. We'll need to keep this in mind, and point out failures we're making as they happen so that we can learn from them and apply those lessons. Naturally, everyone will disagree with each other as to what constitutes a failure and what lessons must be learned from which actions, but criticism never bothers me unless it's of the mean-spirited, unproductive variety. In short, I take Lileks' Andre the Giant philosophy:
Look. I'm a big-tent kinda guy. I'm willing to embrace all sorts of folk whose agendas may differ from mine, as long as we share the realization that there are many many millions out there who want us stone-cold bleached-bones dead. It's the Andre the Giant philosophy, expressed in "Princess Bride":

Well, I hope we win.
Posted by Mark on March 07, 2004 at 08:57 PM .: link :.
Sunday, February 08, 2004
Last week, I wrote a biography for Dan Gable. Because the sport at which Gable excelled was wrestling, most have not heard of him, but within the sport he is a legend. That's him over there on the right, pictured with his Gold Medal from the 1972 Olympics (in which he went undefeated and, indeed, didn't give up a single point - much to the dismay of the Soviets, who had vowed to "scour the country" looking for someone to defeat Gable). His story is an interesting one, but one thing I'm not so sure I captured in my piece was just how obsessed with wrestling he was. He lived, ate, and drank wrestling. When asked what interests he has besides wrestling, the first thing he says is "Recovery" (of course, he has to be completely exhausted to partake in that activity). How he managed to start a family, I will never know (perhaps he wasn't quite as obsessed as I thought). It made me wonder if being that good at something was worth it...
There is an old saying: "Jack of all trades, master of none." This is indeed true, though with the demands of modern life, we are all expected to live in a constant state of partial attention and must resort to drastic measures like self-censorship or information filtering to deal with it all. This leads to an interesting corollary for the master of a trade: he doesn't know how to do anything else!
I'm reminded of a story told by Isaac Asimov, in his essay Thinking about Thinking (which can be found in the Magic collection):
On a certain Sunday, something went wrong with my car and I was helpless. Fortunately, my younger brother, Stan, lived nearby and since he is notoriously goodhearted, I called him. He came at once, absorbed the situation, and began to use the Yellow Pages by the telephone to try to reach a service station, while I stood by with my lower jaw hanging loose. Finally, after a period of strenuous futility, Stan said to me with just a touch of annoyance, "With all your intelligence, Isaac, how is it you lack the brains to join the AAA?" Whereupon, I said, "Oh, I belong to the AAA," and produced the card. He gave me a long strange look and called the AAA. I was on my wheels in half an hour.

He tells this story as part of a discussion on the nature of intelligence and how one is judged to be intelligent. Which brings up an interesting point: how does one even know they are master of a trade? Nowadays, there are few who know one trade so well that all others suffer; we're mostly jacks, to some degree. There are some who are special, who can focus all of their energy into a single pursuit with great success. These people are extraordinarily rare, and somewhat scary in that they can be so brilliant in one sphere, but so clueless in another, more prosaic, department. But that does not help us in diagnosing mastery of a trade.
When you really start to get into it, of course, the metaphor breaks down. Personally, I wouldn't consider myself a master of any trades, but neither would I judge myself a jack. There are several subjects at which I excel, but I can't seem to focus on any one of them - mostly because I like them all so much and I cannot bring myself to narrowly focus my efforts on a single subject. I have my moments of absent-mindedness too, though none quite so drastic as Asimov's amusing tale. But even if I did focus my efforts, how would I know when I've reached the point of mastery?
In the end, I don't think you can tell. Mastery is a worthwhile goal, even if you must sacrifice some of your favorite trades, but because we cannot tell when we've mastered a subject, the term really doesn't have much meaning. As Asimov implies in his aforementioned essay, the only really useful term is "different." It is this difference which is truly important, because what some of us cannot do, others can. This is the basis of society and civilization, and the reason we as humans have prospered as individuals.
And just for fun, an Asimov quote:
"Those people who think they know everything are a great annoyance to those of us who do." - Isaac Asimov

Damn straight.
Update 2.15.04: John Weidner suggests "that when the time comes that we re-open diplomatic relations with Iran, Dan Gable should be our ambassador." He makes a note of how Iranians have previously greeted "The Great Satans' wrasslin' team" with enthusiasm (and a cool Neal Stephenson book).
Posted by Mark on February 08, 2004 at 04:17 PM .: link :.
Tuesday, January 20, 2004
I've noticed a trend in my writing, or, rather, the lack thereof. There are generally four venues in which I write, three of which are on the internet, and one of which is for my job. In the three internet venues, my production has started relatively high, and steadily decreased as time went on. (I suppose I should draw a distinction between writing and simple conversation. Email, for example, is not included as that does not represent the type of writing I'm talking about, though I do write a lot of email and email could possibly become a venue in the future.)
My job sometimes entails the writing of technical specifications for web applications, and this, at least, does not suffer from the same problem. It can be challenging at times, especially if I need to tailor them towards both a technical and non-technical audience, but for the most part it is a straightforward affair (it helps that they pay me too). Once I have all the information, resources, and approvals I need, the writing comes easy (well, I'm simplifying for the sake of discussion here, but you get the point).
This is in part because technical writing doesn't need to be compelling, which is where I stumble. It's also because collecting information and resources for this sort of thing is simpler and the information is easier to organize. I'm not especially articulate when it comes to expressing my thoughts and ideas. If I ever manage it, it's only because I've spent an inordinate amount of time polishing the text (and if I haven't spent that time, I'm in trouble). Hell, I tried to be organized and wrote a bit of an outline for this post, but I had trouble doing even that.
And, of course, I notice that I'm not following my outline either. But I digress.
The other three venues are my weblog (natch), Everything2, and various discussion forums.
This weblog has come a long way over the three and a half years since I started it, and at this point, it barely resembles what it used to be. I started out somewhat slowly, just to get an understanding of what this blogging thing was and how to work it (remember, this was almost four years ago and blogs weren't nearly as common as they are now), but I eventually worked up to posting about once a day, on average. At that time, a post consisted mainly of a link and maybe a summary or some short commentary. Then a funny thing happened: I noticed that my blog was identical to any number of other blogs, and thus wasn't very compelling. So I got serious about it, and started really seeking out new and unusual things. I tried to shift focus away from the beaten path and started to make more substantial contributions. I think I did well at this, but it couldn't really last. It was difficult to find the offbeat stuff, even as I pored over massive quantities of blogs, articles and other information (which caused problems of its own). I slowed down, eventually falling into an extremely irregular posting schedule on the order of once a month, which I have since attempted to correct, with, I hope, some success. I recently noticed that I have been slumping somewhat, though I'm still technically keeping to my schedule.
During the period in which I wasn't posting much on the weblog, I was "noding" (as they call it) over at Everything2, which is a collaborative database project. There too, I started strong and have since petered out. However, similar to what happened in the weblog, the quality improved even as the quantity decreased. This is no coincidence. It takes longer to write a good node, so it makes sense that the quantity would be inversely proportional to the quality.
Of the three internet venues, discussion forums are the simplest as they are informal and require the least amount of rigor (and in that respect, they resemble email, but there is a small difference which we will come to in a bit). Even then, though, in certain forums I have noticed my production fall as well. These are predominantly debating forums where I was making some form of argument. What I found was that, as time went on, I tended to take the debates more seriously and thus I spent more time and effort on making sure my arguments were logically consistent and persuasive. And again, my posting at these forums has slowed considerably.
One other note about these three: it seems that at any given time, I am only significantly contributing to one of these three. When the blog posting slowed, I moved to E2, for example, and when that slowed down, I focused on the forums. Now that I've come back to the blog, the others have suffered. There are all sorts of reasons why writing slows that have nothing to do with the process of writing or choosing what to write, but I do think those things contribute as well.
In effect, this represents a form of self-censorship. I'm constantly evaluating ideas for inclusion in the weblog. Jonathon wrote about this a few weeks ago, and he put it well:
...having a weblog turns information overload into a two-way process: first you suck all this stuff into your head for processing; and then you regurgitate it as weblog posts. And, while this process isn't all that different from the ways in which we manipulate information in our jobs, it's something that we've chosen to do in addition to our jobs, something that detaches us even further from "real life". I suspect that the problem is compounded by the fact that weblog entries are overwhelmingly expressions of opinion and, to make it worse, many of the opinions are opinions about opinions on issues concerning which the opinionators have little, if any, firsthand knowledge or experience. Me included.

As time goes on, my evaluation of what is blog-worthy has gotten more and more discriminating (as always, there are exceptions) and the quality has gone up. But, of course, the quantity has gone down.
Why? Why do I keep doing this? It is tempting to write it off as laziness, and that is no doubt part of it. It's not like it takes me a week to write a post or a node. At most, it takes a combined few hours.
Part of the problem is finding a few uninterrupted hours with which to compose something. In all of my writing endeavors, I've set the bar high enough that it requires too much time to do at once. When I didn't expect much out of myself on the blog or on E2, I could produce a lot more because the time required to do so was small enough that I could do so quickly and effectively. Back in the day, I could blog during my lunch break. I haven't been able to do that lately (as in, the past few years).
The natural solution to that is to split up writing sessions, and that is what I often do, but there are difficulties with that. First, it breaks concentration. Each writing session needs to start with several minutes of re-familiarizing myself with the subject. So even the sessions need to be reasonably large chunks of time. In addition, if these chunks are spread out too far, you run the risk of losing interest and motivation (and it takes longer to re-familiarize yourself too).
Motivation can be difficult to sustain, especially over long periods of time, which might also be the reason why I seem to rotate between the three internet venues.
There is an engineering proverb that says Fast, Good, Cheap - Pick two. The idea is that when you're tackling a project, you can't have all three. If you favor making a quality product in a short period of time, it is going to cost you. Similarly, if you need to do it on the cheap and also in a short period of time, you're not going to end up with a quality product. I think there might be some sort of corollary at work here, Quality, Quantity, Time - Pick Two. Meaning that if I want to write a high quality post in a relatively short period of time, the quantity will suffer. If I want a high quantity of posts that are also of a high quality, then it will take up a lot of my time. And so on...
This post was prompted by something Dave Rogers wrote a while back:
I find I have less to say about things these days. Often I feel the familiar urge to say something, but now I'm as likely to keep quiet as I am to speak up. This bothers me a little, because I've always felt it was important to speak up when you felt strongly about something. Now I'm not so sure about that.

Despite all that I've said so far, I actually have been writing here for quite some time. Sure, I swap venues or slow down sometimes, but I have kept a relatively steady pace among them in the past few years. Dave's post made me wonder about why I want to write and what kept me writing. There are plenty of reasons, but one of the most important is that I am usually writing about things I don't know very well... and I learn from the experience. Blogging originally taught me to seek out and find things off the beaten path, Everything2 gave me an excuse to research various subjects and write about them (most of what I write there are called "factuals" - sort of like writing an encyclopedia entry), and the forums forced me to form an opinion and let it stand up to critical testing. I'm not exactly sure what it is I'm learning right now, but I'm enjoying myself.
Posted by Mark on January 20, 2004 at 08:31 PM .: link :.
Sunday, January 18, 2004
To the Moon!
President Bush has laid out his vision for space exploration. Reaction has mostly been lukewarm. Naturally, there are opponents and proponents, but in my mind it is a good start. That we've changed focus to include long-term manned missions on the Moon and a mission to Mars is a bold enough move for now. What is difficult is that this is a program that will span several decades... and several administrations. There will be competition and distractions. To send someone to Mars on the schedule Bush has set requires a consistent will among the American electorate as well. However, given the technology currently available, it might prove to be a wise move.
A few months ago, in writing about the death of the Galileo probe, I examined the future of manned space flight and drew a historical analogy with the pyramids. I wrote:
Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" have inherent value, well beyond the simple science involved.

We should, and I'm glad we're orienting ourselves in this direction. Bush's plan appeals to me because of its pragmatism. It doesn't seek to simply fly to Mars, it seeks to leverage the Moon first. We've already been to the Moon, but it still holds much value as a destination in itself as well as a testing ground and possibly even a base from which to launch or at least support our Mars mission. Some, however, see the financial side of things as a little too pragmatic:
In its financial aspects, the Bush plan also is pragmatic -- indeed, too much so. The president's proposal would increase NASA's budget very modestly in the near term, pushing more expensive tasks into the future. This approach may avoid an immediate political backlash. But it also limits the prospects for near-term technological progress. Moreover, it gives little assurance that the moon-Mars program will survive the longer haul, amid changing administrations, economic fluctuations, and competition from voracious entitlement programs.

There's that problem of keeping everyone interested and happy in the long run again, but I'm not so sure we should be too worried... yet. Wretchard draws an important distinction: we've laid out a plan to voyage to Mars - not a plan to develop the technology to do so. Efforts will be proceeding on the basis of current technology, but as Wretchard also notes in a different post, current technology may be unsuitable for the task:
Current launch costs are on the order of $8,000/lb, a number that will have to be reduced by a factor of ten for the habitation of the moon, the establishment of Lagrange transfer stations or flights to Mars to be feasible. This will require technology, and perhaps even basic physics that does not even exist. Simply building bigger versions of the Saturn V will not work. That would be "like trying to upgrade Columbus's Nina, Pinta, and Santa Maria with wings to speed up the Atlantic crossing time. A jet airliner is not a better sailing ship. It is a different thing entirely." The dream of settling Mars must await an unforeseen development.

Naturally, the unforeseen development is notoriously tricky, and while we must pursue alternate forms of propulsion, it would be unwise to hold off on the voyage until this development occurs. We must strike a delicate balance between the concentration on the goal and the means to achieve that goal. As Wretchard notes, this is largely dependent on timing. What is also important here is that we are able to recognize this development when it happens and that we leave our program agile enough to react effectively to this development.
Recognizing this development will prove interesting. At what point does a technology become mature enough to use for something this important? This may be relatively straightforward, but it is possible that we could jump the gun and proceed too early (or, conversely, wait too long). Once recognized, we need to be agile, by which I mean that we must develop the capacity to seamlessly adapt the current program to exploit this new development. This will prove challenging, and will no doubt require a massive increase in funding, as it will also require a certain amount of institutional agility - moving people and resources to where we need them, when we need them. Once we recognize our opportunity, we must pounce without hesitation.
It is a bold and challenging, yet judiciously pragmatic, vision that Bush has laid out, but this is only the first step. The truly important challenges are still a few years off. What is important is that we recognize and exploit any technological advances on our way to Mars, and we can only do so if we are agile enough to effectively react. Exploration of the frontiers is a part of my country's identity, and it is nice to see us proceeding along these lines again. Like the Egyptians so long ago, this mammoth project may indeed inspire a unity amongst our people. In these troubled times, that would be a welcome development. Though Europe, Japan, and China have also shown interest in such an endeavor, I, along with James Lileks, like the idea of an American being the first man on Mars:
When I think of an American astronaut on Mars, I can't imagine a face for the event. I can tell you who staffed the Apollo program, because they were drawn from a specific stratum of American life. But things have changed. Who knows who we'd send to Mars? Black pilot? White astrophysicist? A navigator whose parents came over from India in 1972? Asian female doctor? If we all saw a bulky person bounce out of the landing craft and plant the flag, we'd see that wide blank mirrored visor. Sex or creed or skin hue - we'd have no idea.

Indeed.
Update 1.21.04: More here.
Posted by Mark on January 18, 2004 at 05:16 PM .: link :.
Sunday, December 28, 2003
On the Overloading of Information
Jonathon Delacour asks a poignant question:
who else feels overwhelmed by the volume of information we expect ourselves to absorb and process every day? And how do you manage to deal with it?

Judging from the comments, his post has obviously struck a chord with his readers, myself included. I am once again reminded of Neal Stephenson's original minimalist homepage, in which he speaks of his ongoing struggle against what Linda Stone termed "continuous partial attention," for that is the way in which modern life must be for a great deal of us.
I am often overwhelmed by a desire to consume various things - books, movies, music, etc... The subjects of these things are also varied and, as such, often don't mix very well. That said, the only thing I have really found that works is to align those subjects that do mix in such a way that they overlap. This is perhaps the only reason blogging has stayed on my plate for so long: since the medium is so free-form and since I have absolute control over what I write here and when I write it, it is easy to align my interests in such a way that they overlap with my blog (i.e. I write about what interests me at the time). I have been doing so for about three and a half years, and the blog as it now exists barely resembles what it once did. This is, in part, because my interests have shifted during that time. There was a period of about a year in which blogging was very sparse indeed, but before I tackle that, I wish to backtrack a bit.
As I mentioned, this subject has struck a chord with a great deal of people, and the most common suggestion for how to deal with such a quandary is a form of information filtering. Usually this takes the form of a rather extreme and harsh filtering system - namely, removing one source of information entirely. Delacour speaks of a friend who only recently bought a television and VCR, and even then he did so only so that his daughters could watch videos a few times a week. The complete removal of one source of information seems awfully drastic to me, though I suppose I've done so from time to time. For about a year, I had not bought or sought out any new music, only recently emerging from this out of boredom. It was a conscious decision to remove music from my sphere of learning, though I continued to listen to and very much enjoy music. I simply didn't understand music the way I understood film or literature (inasmuch as I understand either of those) and didn't want to burden myself overinterpreting yet another medium. Even as it stands now, I'm not too concerned over what I'm listening to, as long as it keeps my attention during a rather long commute.
Some time ago, I used to blog a lot more often than I do now. And more than that, I used to read a great deal of blogs, especially new blogs (or at least blogs that were new to me). Eventually this had the effect of inducing a sort of ADD in me. I consumed way too many things way too quickly and I became very judgmental and dismissive. There were so many blogs that I scanned (I couldn't actually read them, that would take too long for marginal gain) that this ADD began to spread across my life. I could no longer sit down and just read a book, even a novel.
Eventually, I recognized this, took a bit of a break from blogging, and attempted to correct it, with some success. I have since returned to blogging, albeit at a slower pace, and have taken measures against falling into that same trap, though only with limited success. I have come to the conclusion that I can only do one major internet endeavor at a time. During the period of slow blogging, I turned my attention towards Everything2 (a sort of online collaborative encyclopedia), but I have found that as I returned to blogging, I could not find time for E2, unless they somehow overlapped (as they do, from time to time). Likewise, I cannot devote much time to discussion of various subjects at various forums if I am blogging or noding (as posting at E2 is called). Delacour's description of his own quandary is somewhat accurate in my case as well:
Self-employment, a constant Internet connection, a weblog, and a mildly addictive personality turn out to be a killer combination - even for someone who no longer feels compelled to post regularly, let alone every day.

So the short answer to Delacour's question of how people deal with information overload is, of course, filtering. It is the manner and degree to which we filter that is important. And of course it must be said that any filtering system you set up must be dynamic - it must change as you change and as the world changes. It is a challenge to find the right balance, and it is also a challenge to keep that balance.
An interesting post-script to this is that I ran across Delacour's post several weeks ago, and am only coming to post about it today. Make of that what you will.
In any case, I'd like to turn my attention to another of Delacour's posts, titled I'll link to whoever he's linking to, in which he talks a lot about what drives people to link to other blogs on their blog. It is an exceptional analysis and well worth reading in its entirety. At one point, he points to "six principles of persuasion" (as defined by a Psychology professor in the context of cult recruitment) and applies those principles to weblogs and blogrolls with some success. This has prompted some thought on my part, and I have decided to update the blogroll. As you might guess, a number of the six principles of persuasion are at work in my blogroll, but I would note that the most accurate in my case are "liking" (as in, the reason all of those links are there is because I like them and read them regularly - indeed, it is almost there out of a pragmatic want of having the most common sites I visit linked from one place) and "Commitment and Consistency." By far the least important is the "Social Proof" principle, which states that "In a given situation, our view of whether a particular behavior is correct or not is directly proportional to the number of other people we see performing that behaviour" or, applied to blogs, "If all those other people have X on their blogrolls, then he definitely should be on my blogroll."
In fact, I had updated the blogroll somewhat recently already. One of the blogs I added then was the Belmont Club, which has enjoyed a certain amount of notoriety lately, thanks in part to Steven Den Beste (who, interestingly enough, had prompted Delacour's post about linking in the first place). So Belmont Club went from a relatively obscure excellent blog to a blog that is well known and now highly linked to. Believe it or not, this has weighed unfavorably upon my decision to keep Belmont Club on the blogroll. I have opted to do so for now because my "liking" of that blog far outweighs my distaste for "social proof." In any case, the blogroll will be updated shortly, with but a few new blogs...
I find both of these subjects (information overload and linking) to be interesting, so I may spend some time later this week hashing out a little more about both subjects... or perhaps not - perhaps some other interest will gain favor in my court. We shall see, I suppose.
Posted by Mark on December 28, 2003 at 11:17 AM .: link :.
Wednesday, December 03, 2003
Is the Christmas Tree Christian?
The Winter Solstice occurs when your hemisphere is leaning farthest away from the sun (because of the tilted axis of the earth's rotation), and thus this is the time of the year when daylight is the shortest and the sun has its lowest arc in the sky.
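As an aside, the geometry described above is easy to sketch numerically. The following is a rough back-of-the-envelope approximation (the 23.44 degree axial tilt and the standard sunrise-equation form are well known, but the function name, the day-of-year offset, and the simplifications here are my own illustration, not anything authoritative):

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight, ignoring refraction and the
    size of the sun's disc (the standard sunrise-equation sketch)."""
    # Solar declination: the earth's 23.44 degree tilt projected onto
    # the day of the year (day 1 = January 1); most negative near the
    # December solstice, most positive near the June solstice.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle of sunrise/sunset; clamp handles polar day and night.
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))
    return 2.0 * math.degrees(math.acos(x)) / 15.0  # sun moves 15 deg/hour

# Around the December solstice (day ~355), the northern hemisphere
# sees its shortest day, and the effect grows with latitude:
print(day_length_hours(40.0, 355))  # mid-latitude winter: short day
print(day_length_hours(40.0, 172))  # same latitude in June: long day
print(day_length_hours(80.0, 355))  # far north in December: polar night
```

Nothing here is needed to follow the rest of the post; it just makes the "lowest arc, shortest daylight" point concrete.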
No one is really sure when exactly it happened (or who started the idea), but this period of time eventually took on an obvious symbolic meaning to human beings. Many geographically diverse cultures throughout history have recognized the winter solstice as a turning point, a return of the sun. Solstice celebrations and ceremonies were common, sometimes performed out of a fear that the failing light of the sun would never return unless humans demonstrated their worth through celebration or vigil.
It has been claimed that the Mesopotamians were among the first to celebrate the winter solstice with a 12-day festival of renewal, designed to help the god Marduk tame the monsters of chaos for one more year. Other theories go as far back as 10,000 years. More recently, the Romans celebrated the winter solstice with a festival called Saturnalia in honor of Saturn, the god of agriculture.
Integral to many of these celebrations were plants and trees that remained green all year. Evergreens reminded them of all the green plants that would grow again when the sun returned; they symbolized the solstice and the triumph of life over death.
In the early days of Christianity, the birth of Christ was not celebrated (instead, Easter was, and possibly still is, the main holiday of Christianity). In the fourth century, the Church decided to make the birth of Christ a holiday to be celebrated. There was only one problem - the Bible makes no mention of when Christ was born. Although there was some evidence to draw from, the Church chose to celebrate Christmas on December 25. It is believed that this date was chosen to coincide with traditional winter solstice festivals such as the Roman pagan Saturnalia festival in the hopes that Christmas would be more popularly embraced by the people of the world. And embraced it was, but the Church found that as the holiday spread, their choice to hold Christmas at the same time as solstice celebrations did not allow the Church to dictate how the holiday was celebrated. And so many of the pagan traditions of the solstice survived through the following millennia, even though pagan religions had largely given way to Christianity.
And so the importance of evergreens in these celebrations continued. The use of the Christmas tree, as we now know it, is generally credited to sixteenth century Germans, specifically the Protestant reformer Martin Luther, who is thought to be the first to add lighted candles to a tree.
While the Germans found a certain significance in the pagan traditions concerning evergreens, it was not a universally held belief. For instance, the Christmas tree did not gain traction in America until the mid-nineteenth century. Up until then, Christmas trees were generally seen as pagan symbols and mocked by New England Puritans. But the tradition took hold thanks to German settlers in Pennsylvania (among others) and the increasing secularization of the holiday in America. In the past century, the Christmas tree has only grown in popularity, as more and more people adopted the tradition of displaying a decorated evergreen in their home. After all this time, Christmas trees have become an American tradition.
There has been a lot of controversy lately concerning the presence (or, I suppose, the removal and thus absence) of Christmas trees in schools. Personally, I don't see what is so controversial about it, as a Christmas tree is more of a secular, rather than religious, symbol. Joshua Claybourn quotes the Supreme Court thusly:
"The Christmas tree, unlike the menorah, is not itself a religious symbol. Although Christmas trees once carried religious connotations, today they typify the secular celebration of Christmas." Allegheny v. American Civil Liberties Union Greater Pittsburgh Chapter, 492 U.S. 573, 109 S.Ct. 3086. It does not represent a religious idea, but rather the idea of renewal that accompanied the winter solstice. One can associate Christian ideas with the tree, as Martin Luther did so long ago, but that does not make it inherently Christian. Indeed, I think of the entire Christmas holiday as more secular than not, though I guess my being Christian might have something to do with it. This idea is worth further exploring in the future, so expect more posts on the historical Christmas.
Update: Patrick Belton notes the strange correlations between Christmas Trees and Prostitution in Virginia.
Posted by Mark on December 03, 2003 at 11:31 PM .: link :.
Thursday, November 27, 2003
A Thanksgiving Cuisine Proposal
Last night I dined on fresh Sushi and washed it down with a generous portion of Hennepin (a fine beer, that). I was thinking of today's inevitable gorging and I had a brilliant idea.
If I had any photoshopping skillz, I'd have a really funny picture of a piece of sushi with a cartoon turkey head sticking out of it.
Anyway, the only thing I can't figure out is the seaweed. I'm not sure how that would go with this. Then again, throw in a sliver of gelatinous cranberry sauce with the cold turkey and you have an even better turkey roll. This is a huge market we're missing out on here! I'll be a millionaire in no time. Happy Thanksgiving all!
Posted by Mark on November 27, 2003 at 10:38 AM .: link :.
Wednesday, October 08, 2003
Annals of the Mathematically Challenged
Fritz Schranck relates a story of a mathematically challenged fast-food cashier whose register was broken and who couldn't figure out how to make change (the customer had given the cashier $10 for a bill of $8.95). He goes on to say that he's heard these sorts of stories before, but he'd never seen it for himself until then...
But I think I've got him beat. A few years ago, I happened to be perusing some titles at the 'tique when someone asked the sales clerk what time it was. He picked up a watch, and a confused frown spread across his face. He then grinned, grabbed a calculator from under the counter, and began punching in numbers. At this point he responded to the customer's quizzical look by explaining, "The watch is on military time." It was 1400 hours (aka 2:00 p.m.)
Posted by Mark on October 08, 2003 at 11:28 PM .: link :.
Monday, September 08, 2003
My God! It's full of stars!
What Galileo Saw by Michael Benson : A great New Yorker article on the remarkable success of the Galileo probe. James Grimmelmann provides some fantastic commentary:
Launched fifteen years ago with technology that was a decade out of date at the time, Galileo discovered the first extraterrestrial ocean, holds the record for most flybys of planets and moons, pointed out a dual star system, and told us about nine more moons of Jupiter. And the brilliance doesn't end there:
As if that wasn't enough hacker brilliance, design changes in the wake of the Challenger explosion completely ruled out the original idea of just sending Galileo out to Mars and slingshotting towards Jupiter. Instead, two Ed Harris characters at NASA figured out a triple bank shot -- a Venus flyby, followed by two Earth flybys two years apart -- to get it out to Jupiter. NASA has come in for an awful lot of criticism lately, but there are still some things they do amazingly well. Score another one for NASA (while you're at it, give Grimmelmann a few points for the Ed Harris reference). Who says NASA can't do anything right anymore? Grimmelmann observes:
The Galileo story points out, I think, that the problem is not that NASA is messed-up, but that manned space flight is messed-up. Is manned space flight in danger of becoming extinct? Is it worth the insane amount of effort and resources we continually pour into the space program? These are not questions I'm really qualified to answer, but it's interesting to ponder. On a personal level, it's tempting to righteously proclaim that it is worth it; that doing things that are "difficult verging on insane" has inherent value, well beyond the simple science involved.
Such projects are not without their historical equivalents. There are all sorts of theories explaining why the ancient Egyptian pyramids were built, but none are as persuasive as the idea that they were built to unify Egypt's people and cultures. At the time, almost everything was being done on a local scale. With the possible exception of various irrigation efforts that linked together several small towns, there existed no project that would encompass the whole of Egypt. Yes, an insane amount of resources were expended, but the product was truly awe-inspiring, and still is today.
Those who built the pyramids were not slaves, as is commonly thought. They were mostly farmers from the tribes along the River Nile. They depended on the yearly cycle of flooding of the Nile to enrich their fields, and during the months that their fields were flooded, they were employed to build pyramids and temples. Why would a common farmer give his time and labor to pyramid construction? There were religious reasons, of course, and patriotic reasons as well... but there was something more. Building the pyramids created a certain sense of pride and community that had not existed before. Markings on pyramid casing stones describe those who built the pyramids. Tally marks and names of "gangs" (groups of workers) indicate a sense of pride in their workmanship and respect between workers. The camaraderie that resulted from working together on such a monumental project united tribes that once fought each other. Furthermore, the building of such an immense structure implied an intense concentration of people in a single area. This drove a need for large-scale food storage, among other social constructs. The Egyptian society that emerged from the Pyramid Age was much different from the one that preceded it (some claim that this was the emergence of the state as we now know it.)
"What mattered was not the pyramid - it was the construction of the pyramid." If the pyramid was a machine for social progress, so too can the Space program be a catalyst for our own society.
Much like the pyramids, space travel is a testament to what the human race is capable of. Sure it allows us to do research we couldn't normally do, and we can launch satellites and space-based telescopes from the shuttle (much like pyramid workers were motivated by religion and a sense of duty to their Pharaoh), but the space program also serves to do much more. Look at the Columbia crew - men, women, white, black, Indian, Israeli - working together in a courageous endeavor, doing research for the benefit of mankind, traveling somewhere where few humans have been. It brings people together in a way few endeavors can, and it inspires the young and old alike. Human beings have always dared to "boldly go where no man has gone before." Where would we be without the courageous exploration of the past five hundred years? We should continue to celebrate this most noble of human spirits, should we not?
In the meantime, Galileo is nearing its end. On September 21st, around 3 p.m. EST, Galileo will be vaporized as it plummets into Jupiter's atmosphere, sending back whatever data it still can. This destruction is deliberate - the answer to an intriguing ethical dilemma.
In 1996, Galileo conducted the first of eight close flybys of Europa, producing breathtaking pictures of its surface, which suggested that the moon has an immense ocean hidden beneath its frozen crust. These images have led to vociferous scientific debate about the prospects for life there; as a result, NASA officials decided that it was necessary to avoid the possibility of seeding Europa with alien life-forms. I had never really given thought to the idea that one of our space probes could "infect" another planet with our "alien" life-forms, though it does make perfect sense. Reaction to the decision among those who worked on Galileo is mixed, most recognizing the rationale, but not wanting to let go anyway (understandable, I guess)...
For more on the pyramids, check out this paper by Marcell Graeff. The information he referenced that I used in this article came primarily from Kurt Mendelssohn's book The Riddle of the Pyramids.
Update 9.25.03 - Steven Den Beste has posted an excellent piece on the Galileo mission and more...
Posted by Mark on September 08, 2003 at 11:06 PM .: link :.
Wednesday, August 27, 2003
Come Sail Away
Cruises really are wonderful vacations. I just returned from one, so, in an effort to induce massive jealousy in my readers, I figured I'd give a rundown of all the glorious events which occurred during the past week. I went on a cruise to Bermuda on the Celebrity line a few years back, so I'll be using that as a comparison. This time, I went to the Southern Caribbean on the Royal Caribbean line.
Getting There: The ship sails out of San Juan on Sunday, so you'll need to arrange a flight (uh, unless you're Puerto Rican, I guess), with all the shiny happy security details that implies in the post 9/11 airline world (it also jacks up the price of the overall vacation a little - my cruise to Bermuda left out of New York and so I didn't need to fly). We decided to go early and spend Saturday in San Juan. Given that we were staying at the Ritz-Carlton, this was a most pleasant experience and an excellent start to the vacation. I would highly recommend looking into this option as it was surprisingly inexpensive, and it really is a top notch resort with a fantastic private beach, a huge pool (which was great way to wash off sand), a nice little spa (which I didn't use, but looked great) and some good dining options (I had some Sushi, and was much pleased).
The Ship: Our ship was called the Adventure of the Seas and it was truly awesome (in every sense of that word). All the standard cruise-ship amenities are there: shuffleboard, food and drinks around every corner, pools, showrooms etc... but there are also quite a few uncruise-like activities such as a roller blading track, miniature golf course, ice skating rink, and rock climbing wall. There is this thing called the Royal Promenade, which is a sort of main-street of the ship, with a bunch of shops, bars and cafes (some of which are thankfully open all night). There's a Johnny Rocket's on board as well, just in case you were in the mood for a retro burger joint.
Food: The food was excellent. The main dining room was modeled after the Titanic's dining room, with extravagant settings and twisty staircases. For those who have never been on a cruise, it's difficult to explain just how great the dinners are. There is a different menu every night (each one has a healthy choice and a vegetarian choice as well, in case you were worried :P), and if you are ever torn between ordering two appetizers or entrees or desserts, they'll gladly bring them both out for you. Generally, we only ate dinner there (though I did manage a few lunches, which were surprisingly good); breakfast and lunch were had at the Windjammer Cafe and Caribbean Grill, a buffet that is usually open and provides a low-key alternative to the formality of the main dining room (I never did that though, as I enjoyed the main dining room). Celebrity is known for its superb dining, and Royal Caribbean did a good job but came up just a little bit short (still excellent though).
Entertainment: There is always something to do on a cruise ship. Always. Every day, you get an itinerary of all the things that are going on that day, and you've usually got a lot of options. Every night there is a show in the theater (some nights, there is an Ice Show, which is especially interesting when the ship is moving). Generally, though, I found myself in the Duck and the Dog British pub, doing stuff like this (for the uninitiated, that thing we're drinking is what's known as an Irish Carbomb). There was a guy playing guitar there every night, and he was awesome (his name was Mark O'Bitz, I can't find anything about him on the net though...). He played all week, and pretty much the same people came every night, so by the end of the week we were all having a blast. A couple of the passengers even got up and sang a song or two. The song that ended up being the cruise's theme was Come Sail Away - one of the passengers always got up and sang it, and he was absolutely marvelous. The whole bar got into it. It was great!
Ports: We docked at 5 ports during the week:
BINGO and Degenerate Gambling: Another cruise staple: BINGO! Alas, despite playing several sessions of BINGO, I did not win. I did, however, win a raffle! I got my choice of 6 paintings. I ended up choosing a painting by Anatole Krasnyansky. It's called Venice Yellow Sunset.
I like to gamble, and I finished almost every night of the cruise at the Casino. I ended up doing surprisingly well, though I think I might be developing a problem (just kidding, I was shocked at my restraint during the week - whenever I was up by a certain amount, I walked, which is the only way you can win at gambling in a Casino). I played a lot of blackjack, but my game of choice ended up being Roulette, which I had never played before. It was a lot of fun, but it is way too easy to drop lots of money...
Returning Home: Not much to say about the return, other than that the airport security in Puerto Rico was very impressive. They were quick, efficient, and thorough (I even had to run my shoes through the x-ray machine with my carry-on).
So there you have it. I could probably go on and on and on about other things I loved about this cruise, but I'm not that cruel. If you have a vacation coming up, check out the cruise option (unless you get sea-sick).
Update 11.23.03 - Added a link to the painting. Also check out the comments for the profound effect Mark O'Bitz has had on many people's lives!
Posted by Mark on August 27, 2003 at 11:11 PM .: link :.
Friday, August 08, 2003
A few weeks ago, the regular weather guy on the radio was sick and a British meteorologist filled in. And damned if I didn't think it was the best weather forecast I'd ever heard! The report, which called for rain on a weekend in which I was traveling, turned out to be completely inaccurate, much to my surprise. I really shouldn't have been surprised, though. I know full well the limitations of meteorology, and weather reports can't be that accurate. Truth be told, I subconsciously placed a higher value on the weather report because it was delivered in a British accent. It's not his fault - he can predict the weather no better than anyone else in the world - but the British accent carries with it an intellectual stereotype; when I hear one, I automatically associate it with intelligence.
Which brings me to John Patterson's recent article in the Guardian in which he laments the inevitable placement of British characters and actors in the villainous roles (while all the cheeky Yanks get the heroic roles):
Meanwhile, in Hollywood and London, the movie version of the special relationship has long played itself out in like manner. Our cut-price actors come over and do their dirty work, as villains and baddies and psychopaths, even American ones, while the cream of their prohibitively expensive acting talent Concordes it over the pond to steal the lion's share of our heroic roles. Either way, we lose. One could wonder why Patterson is so upset that American actors get the heroic parts in American movies, but even if you ignore that, Patterson is stretching it pretty thin.
As Steven Den Beste notes, this theory doesn't go too far in explaining James Bond or Spy Kids. Never mind that the Next Generation captain of the starship Enterprise was a Brit (playing a Frenchman, no less). Ian McKellen plays Gandalf; Ewan McGregor plays Obi Wan Kenobi. The list goes on and on.
All that aside, however, it is true that British actors and characters often do portray the villain. It may even be as lopsided as Patterson contends, but the notion that such a thing implies some sort of deeply-rooted American contempt for the British is a bit off.
As anyone familiar with film will tell you, the villain needs to be so much more than just vile, wicked or depraved to be convincing. A villainous dolt won't create any tension with the audience; you need someone with brains or nobility. Ever notice how educated villains are? Indeed, there seems to be a preponderance of doctors who become supervillains (Dr. Demento, Dr. Octopus, Dr. Doom, Dr. Evil, Dr. Frankenstein, Dr. No, Dr. Sardonicus, Dr. Strangelove, etc...) - does this reflect an antipathy towards doctors? The abundance of British villains is no more odd than the abundance of doctors. As my little episode with the weatherman shows, when Americans hear a British accent, they hear intelligence. (This also explains the Gladiator case in which Joaquin Phoenix, who is Puerto Rican by the way, puts on a veiled British accent.)
The very best villains are the ones that are honorable, the ones with whom the audience can sympathize. Once again, the American assumption of British honor lends a certain depth and complexity to a character that is difficult to pull off otherwise. Who was the more engaging villain in X-Men, Magneto or Sabretooth? Obviously, the answer is Magneto, played superbly by British actor Ian McKellen. Having endured Nazi death camps as a child, he's not bent on domination of the world; he's attempting to avoid living through a second holocaust. He's not a megalomaniac, and his motivation strikes a chord with the audience. Sabretooth, on the other hand, is a hulking but pea-brained menace who contributes little to the conflict (much to the dismay of fans of the comic, in which Sabretooth is apparently quite shrewd).
Such characters are challenging. It's difficult to portray a villain as both evil and brilliant, sleazy and funny, moving and tragic. In fact, it is because of the complexity of this duality that villains are often the most interesting characters. That British actors are often chosen to do so is a testament to their capability and talent.
Some would attribute this to stage training, which is much less common in the U.S. British actors can deliver a daring and audacious performance while still fitting into an ensemble. It's also worth noting that many British actors are relatively unknown outside of the UK. Since they are capable of performing such a difficult role, and since they are unfamiliar to US audiences, casting them makes the films more interesting.
In the end, there's really very little for Patterson to complain about, especially when he tries to port this issue over to politics. While a case may be made that there are a lot of British villains in movies (and there are plenty of villains who aren't British), that doesn't mean there is anything malicious behind it; indeed, depending on how you look at it, it could be considered a compliment that British culture lends itself to the complexity and intelligence required for a good villain we all love to hate (and hate to love). [thanks to USS Clueless for the Guardian article]
Posted by Mark on August 08, 2003 at 09:36 AM .: link :.
Friday, July 11, 2003
Dude, Where's My Dude? Dudelicious Dissection, From Sontag to Spicoli by Ron Rosenbaum : Dude, this is some seriously funny reading. The complete history of Dude, from its humble origins as an "aesthetic craze" in New York, circa 1883, to Dude, Where's My Car? in 2000.
Everybody thinks "dude ranch" came first and was somehow the origin. But whence came the dude in "dude ranch"? Before the dude-ranch dude there was dude as dandy, the dude as an urban aesthete; it was the urbanity of dude that made the dude-ranch dude dude-ish. This is so stupid, but it's a smart stupid. Almost Pynchonian, really. Seriously, it's a surprisingly complete article, worth reading if only to experience the whopping 160 or so occurrences of the term "Dude" or its derivatives. [via Ipse Dixit - Thanks Dude!]
Update: Unrelated, but interesting: A brief Googling of Pynchon and Dude turned up this article, also by Rosenbaum, about Pynchon and Phone Phreaking.
Posted by Mark on July 11, 2003 at 12:40 PM .: link :.
Sunday, July 06, 2003
I was playing Trivial Pursuit the other day, and I was again struck by the victimology that always seems to play out during such a game. "You get all the easy questions! It's no fair!" At times, that's probably true, but over the course of an entire game, it's a little less clear who is really getting the short end of the stick. Ignoring for a moment what questions are considered easy (if I answer a question immediately after it was asked, was it an easy question?), this sort of victimology is a difficult thing to avoid. I definitely feel that way sometimes, but I'm beginning to come around. Besides, in the end there's really nothing you can do about it. Nobody said life would be fair.
Obviously, this doesn't just affect trivia games either. My first programming class in college was extremely difficult. The professor was a stickler for things like commenting and algorithmic efficiency (something we didn't even know how to measure yet), but he never told us these things. When we did an assignment, we'd get it back, all marked up to hell. "But it works! It does exactly what you said you wanted it to do!" Obviously, everyone hated this man, myself included. Only two As were given out in his class that semester, and I ended up with a B (and I wasn't too happy about that). Classes taught by other professors, on the other hand, were much simpler. However, during the course of the next year or so, it became abundantly clear to me that I had learned a hell of a lot more than everyone else, so when it came time to buckle down and write an operating system (!) I ended up not having as much trouble as many other students.
It didn't work that way for everyone in the class. While I hated the professor, I never stopped trying. I ended up learning from my mistakes, while others bitched and moaned about how unfair it was. Ironically, even those in the "easy" classes were complaining about how difficult the course was.
So now it's occurring to me that everyone feels like a victim. Take a little trip around the blogosphere and you'll see lots of protestations about the "liberal media". Then I head over to 4degreez and hear all the complaints about the "conservative media". Well, which is it? With respect to the media, everyone is a victim. Why is that?
I see both, all the time. The truth is that there are tons of both liberal and conservative media sources. You just have to know which is which and take them with the appropriate grains of salt. Yes, it's frustrating, I know, but playing the victim leads to ruin, and it prevents you from honing your arguments, making them stronger and more resistant to criticism.
Don't take this to mean that we should not be criticising the media. We should be, emphatically. Blogs are great for this in that they are fact-checking everyone and their mother, and will often print retractions of their own mistakes quickly and efficiently (alas, not all blogs are that trustworthy).
And really, the media could be doing a whole lot more to help us than it currently does, especially on the internet. On the internet, there are no compelling spatial boundaries, no character limits. There is no reason complete interview transcripts or official documents can't be posted along with an article. Hell, it's the internet - link to other sources and even criticisms. Let us make up our own minds! Traditional media is awful at this, though I have seen at least some examples of this sort of thing around. The only "problem" with that is that the media could no longer misquote people on a whim or creatively skew statistics, simply because they don't like someone or something (if I had a dime for every time Wolfowitz was misquoted, I'd be a rich man. I know this because the DoD posts full transcripts of briefings, interviews, and press conferences on their site, much to the dismay of the media, who are now getting caught). There are tons of great ideas, none of which would be all that difficult to implement from a technical standpoint.
The media has lots of work to do, and with the increase of informational transparency in our society, they had better get going. Soon. In the meantime, if you're conservative, look at the liberal media as an opportunity for strengthening your arguments. Don't bitch and whine about the liberal media and dismiss it out of hand. If you're liberal, don't get pissed off that the media isn't repeating whatever new contradictory conspiracy theory you've concocted; take a page out of the bloggers' book. Fact-check their asses!
Posted by Mark on July 06, 2003 at 01:21 PM .: link :.
Thursday, June 12, 2003
"You know the world is going crazy when the best rapper is a white guy, the best golfer is a black guy, the Swiss hold the America's Cup, France is accusing the U. S. of arrogance, and Germany doesn't want to go to war" - NothingLasts4ever
What a quote, what a world!
Posted by Mark on June 12, 2003 at 11:22 AM .: link :.
Sunday, May 11, 2003
To hit or not to hit, that is the question
Gambling is a strange vice. Anyone with a brain in their head knows the games are rigged in the Casino's favor, and anyone with a knowledge of Mathematics knows how thoroughly the odds are in the Casino's favor. But that doesn't stop people from dropping their paychecks in a few hours. I stopped by Atlantic City this weekend, and I played some blackjack. The swings are amazing. I only played for about an hour, but I am always fascinated by the others at the table and even my own reactions.
I don't play to win; rather, I don't expect to win, but I like to gamble. I like having a stack of chips in front of me, I like the sounds and the smells and the gaudy flashing lights (I like the deliberately structured chaos of the Casino). I allot myself a fixed budget for the night, and it usually adds up to approximately what I'd spend on a good night out. People watching isn't really my thing, but it's hard not to enjoy it at a Casino, and that's something I spend a lot of time doing. Some people have the strangest superstitions and beliefs, and it's fun to step back and observe them at work. Even though I know the statistical underpinnings of how gambling works at a Casino, I find myself thinking the same superstitious stuff, because it's only natural.
For instance, a lot of people think that if a player sitting at their table makes incorrect playing decisions, it hurts everyone else's odds. Statistically, this is not true, but when that guy sat down at third base and started hitting on his 16 when the dealer was showing a 5, you better believe a lot of people got upset. In reality, that moron's actions have just as much chance of helping the other players as hurting them, but that's no consolation to someone who lost a hundred bucks in the short time since he sat down. Similarly, many people have progressive betting strategies that are "guaranteed" to win. Except, you know, they don't actually work (unless they're based on counting, but that's another story).
The odds in AC for Blackjack give the House an edge of about 0.44%. That doesn't sound like much, but it's plenty for the Casino, because the Casino would hold an unfair advantage even if the odds were dead even: deep pockets. In order to take advantage of a prosperous swing in the game, you need to weather the House's streaks. If you're playing with $1000, you might be able to ride out a few, but the Casino is playing with millions of dollars. They will break your bank if you spend enough time there, even without the statistical advantage. That's why you get comps when you win - they're trying to keep you at the table long enough to bring you back to the statistical curve.
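If you're skeptical of the deep-pockets argument, it's easy to check with a quick simulation. This is just a rough sketch, not a real blackjack model - it folds the 0.44% edge into a simple even-money coin flip and ignores payouts, doubles, and splits - but it shows how a finite bankroll fares against a tiny edge over a long session:

```python
import random

def simulate_session(bankroll, bet, hands, edge=0.0044):
    """Flat-bet even-money wagers against a small house edge.
    Returns the final bankroll (0 if ruined before the session ends)."""
    p_win = 0.5 - edge / 2  # even-money approximation: EV per hand = -edge * bet
    for _ in range(hands):
        if bankroll < bet:
            return 0  # can't cover the next bet: busted
        bankroll += bet if random.random() < p_win else -bet
    return bankroll

random.seed(42)
# A $1000 bankroll betting $25 a hand: each hand is nearly a coin flip,
# but the longer you sit, the more sessions end in ruin.
ruined = sum(simulate_session(1000, 25, 2000) == 0 for _ in range(500))
print(f"{ruined}/500 sessions busted")
```

Crank up the number of hands and the busted fraction climbs, edge or no edge - which is exactly the point about the House outlasting your bankroll.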
The only way you can really win at Blackjack is to have the luck of a quick streak and the willpower to stop while you're up (as I noted before, if you're up a lot, the Casino will do their best to keep you playing), but that's a fragile system - you can't count on it, though it will happen sometimes. The only way to consistently win at Blackjack is to count cards. That can give you an advantage of around 1% (more on certain hands, less on others), depending on the House rules. This isn't Rain Man - you aren't keeping track of every card that comes out of the deck (rather, you're keeping a relative score of high-value cards to low cards), and you don't get an automatic winning edge on every hand. Depending on the count, the dealer can still play consistently better than you - but the dealer can't double down or split, and they only get even money for Blackjack. That's where the advantage comes from.
Of course, you have to have a pretty big bankroll to compensate for the Casino's natural "deep pockets" advantage, and you'll need to spend hundreds of hours practicing at home. Blackjack is fast and you need to be able to keep a running tab of the high/low card ratio (and you need to do some other calculations to get the true count), all the while you must appear to be playing normally, talking with the other players, dealing with the deliberately designed chaotic distractions of the Casino and generally trying not to come off as someone who is intensely concentrating. No small feat.
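I didn't name a particular system above, but the "relative score of high cards to low cards" is essentially the common Hi-Lo count. Here's a minimal sketch of the bookkeeping (the card values and the divide-by-decks-remaining step are the standard Hi-Lo conventions; the example hand is made up):

```python
def hilo_value(card):
    """Hi-Lo card values: 2-6 count +1, 7-9 count 0, tens and aces count -1."""
    if card in (2, 3, 4, 5, 6):
        return 1
    if card in (7, 8, 9):
        return 0
    return -1  # 10 stands in for any ten-value card (10/J/Q/K), 11 for an ace

def true_count(running_count, decks_remaining):
    """Convert the running count to a true count by dividing by decks left."""
    return running_count / decks_remaining

# Cards seen so far this shoe (a made-up example):
seen = [2, 5, 10, 10, 3, 11, 6, 9]
running = sum(hilo_value(c) for c in seen)
# With roughly 4 decks left of a 6-deck shoe:
print(running, true_count(running, 4))
```

The arithmetic is trivial; the hard part, as described above, is doing it in real time at table speed while chatting and looking bored.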
I'm not sure if that'd take all the fun out of it, not to mention draw the Casino's attention to me (which can't be fun), but it would be an interesting talent to have, and it's a must if you want to win. At the very least, it's a good idea to get the basic strategy down. Do that and you'll be better than most of the people out there (even if you just memorize the Hard Totals table, you'll be in good shape).
Posted by Mark on May 11, 2003 at 09:12 PM .: link :.
Tuesday, April 08, 2003
Living in Historic Times
"Wars have a way of overriding the days just before them. In the looking back, there is such noise and gravity. But we are conditioned to forget. So that the war may have more importance, yes, but still... isn't the hidden machinery easier to see in the days leading up to the event? There are arrangements, things to be expedited... and often the edges are apt to lift, briefly, and we see things we were not meant to...." - Thomas Pynchon, Gravity's Rainbow, page 474. Human beings tend to remember an uncompleted task better than a completed one, ostensibly because an uncompleted task has no closure, and thus our minds must continually work to achieve it. This is a drastic oversimplification of what psychologists call the Zeigarnik effect, and you can observe it in action in schools and restaurants across the world. Make a student take the same test he took the day before, and he'll probably do much worse. There are all sorts of similar psychological theories and, depending on how liberally you apply them, you can observe them in action all over the place.
Which makes me wonder: how will we remember this war twenty years from now? How will Bush be perceived? If things continue to go as well as they have, will history remember that this war was immensely unpopular in the world, or the seemingly conflicting and ambiguous motives of the US? Bush and the "Coalition of the Willing" experienced several setbacks in the months leading up to this war, but now, in hindsight, they seem small and insignificant. One of the few things I like about Bush is the way he reacted to these small setbacks. He barely flinched and kept his eye firmly on the long view. In what is perhaps an application of the Zeigarnik effect on a historical level, Bush recognized that people will only remember how something ends, not the events, setbacks and all, that led us there. We've had a spectacularly successful start; now we just need to make sure it ends right... [Pynchon quote from War Words]
Posted by Mark on April 08, 2003 at 08:55 PM .: link :.
Thursday, August 29, 2002
James Grimmelmann has revitalized the Laboratorium. He started blogging again, and since I mostly missed out on it last time, that makes me happy because it's a pleasure to read his stuff. For the past year or so, he's been experimenting with various forms of writing and new web tools (that damn twiki-web thing that doesn't seem to work all that well) but has largely neglected the site, with updates coming only sporadically. It looks as if he's going to stick to it this time, though (which is more than I could say for myself!) Do yourself a favour and check him out.
The "return of Saturn" is a popular theme derived from astrology and is often used in literature (among other arts, such as music) as a symbol for a period of change in a person's life. Metaphorically speaking, you could say that James' Saturn is returning. I'm not sure how old he is, but this may even be true in the astrological sense, not that it would really matter. In any case, I was thinking about that idea when I came across James' revision, so that's why I named the post "Saturn Ascends". And you know how much I love cataloging life's little footnotes...
Posted by Mark on August 29, 2002 at 08:51 PM .: link :.
Monday, August 05, 2002
Kryptonian Love Problems
Man of Steel, Woman of Kleenex by Larry Niven : A funny and very graphic (you were warned) description of the physiological problems Superman would face if he were to attempt to procreate. Niven is best known for his Science Fiction novels, most notably Ringworld (and its sequels), but he shows a biting sense of humour in this essay... Also, as an interesting side note, the influence of this article can be witnessed in Kevin Smith's Mallrats:
Brodie: It's impossible, Lois could never have Superman's baby. Do you think her fallopian tubes could handle his sperm? I guarantee he blows a load like a shotgun, right through her back. What about her womb, you think it's strong enough to carry his child?
When compared to Niven's article, the only new thing is the kryptonite condom bit, but it's funny nonetheless... Still, Niven's article is great... [thanks to Jim Miller]
Posted by Mark on August 05, 2002 at 06:12 PM .: link :.
Monday, July 22, 2002
Surely You're Joking, Mr. Feynman!
Cargo Cult Science by Richard Feynman : Feynman's classic scathing critique of the pseudo-science typified by the "cargo cult" of South Sea islanders:
In the South Seas there is a cargo cult of people. During the war they saw airplanes with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas--he's the controller--and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.
You see this sort of thing often, usually done purposely in order to advance a certain agenda. As Feynman notes, one of the classic examples is advertising. "Wesson oil doesn't soak through food" - well, that's true. But what's missing is that no oils soak through food (when operated at a certain temperature, which is an additional misleading implication). To do away with this, Feynman makes a few suggestions:
In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.
These practices are indeed very important, and are often glossed over in the name of brevity or to save money... don't allow yourself to be fooled by silly correlations and inflated numbers. I've found that there are a lot of issues that seem quite simple on the outside, but when you dig deep, you find lots of contradicting information, making the issue that much more complex... [link found via USS Clueless in the midst of a discussion of international law, though the entry about "benchmarks" of Macs also seems relevant]
Posted by Mark on July 22, 2002 at 05:47 PM .: link :.
Saturday, July 13, 2002
Call Me Lenny by James Grimmelmann : Taco Bell is running a new ad called "Chef Wars" and it is an Iron Chef parody. The commercial is pathetic and James laments that Iron Chef is no longer considered to be a piece of elite culture. Essentially, Iron Chef is no longer cool because it has become so popular that even culturally bereft Taco Bell customers will understand the reference.
As a long time fan of Iron Chef, I suppose I can relate to James. Several years ago, a few drunk friends and I discovered Iron Chef one late night and fell in love with it. In the years that followed, it has grown more and more popular, to the point where there was even a pointless American version (hosted by William Shatner) and a rather funny parody on Saturday Night Live. Seeing those things made it less fun to be an Iron Chef fan, and to a certain extent, I agree with that point. But in a different way, Iron Chef is just as cool as it ever was and, in my mind, a genuinely good show is, well... good, no matter how popular it is.
As commenter Julia (at the bottom) notes, there are two main issues that James is hitting on:
I suppose it all comes down to exclusion. Things are cool, in part, because you are cool enough to recognize them as such. But if everyone is cool, what's the point? Which brings us to Malcolm Gladwell and his Coolhunt:
"In this sense, the third rule of cool fits perfectly into the second: the second rule says that cool cannot be manufactured, only observed, and the third says that it can only be observed by those who are themselves cool. And, of course, the first rule says that it cannot accurately be observed at all, because the act of discovering cool causes it to take flight, so if you add all three together they describe a closed loop, the hermeneutic circle of coolhunting, a phenomenon whereby not only can the uncool not see cool but cool cannot be even adequately described to them."
But is it cool to just recognize something as cool? James recognized Iron Chef as cool, but he didn't really enjoy it. So I guess that we should seek the cool, but not be fooled into thinking something is cool simply because it is going to be big one day...
Posted by Mark on July 13, 2002 at 02:19 PM .: link :.
Wednesday, July 10, 2002
The Post 9/11 Doubt
A Heartbreaking Work of Staggering Evil by Nick Mamatas : Neil Gaiman's oeuvre, and the genre of horror/fantasy in general, is typically looked down upon as unsophisticated or childish, and the past decade saw a marked decrease in the Horror genre's relevance.
"9-11 resembled cheap, lazy fiction, and because it did, it made it strange for writers to decide what is valid artistically."
Horror was beginning to find new voices and new readers even before the attacks on the WTC, but now, after an initial period of doubt, there appears to be a renewed interest in the genre... "The everyday twisted horribly awry is, of course, the state of the nation post-9-11." Will Horror become popular again because it evokes fear of the magnitude we all felt on September 11? Time will tell. [thanks BJ]
Just to rewind a bit, I think the period of doubt mentioned above is a very important phenomenon, and I can see it happening all over the place. My very own weblog here, for instance, is a good example. I had posted fairly regularly up until September, focusing mainly on film and various interesting articles on culture and whatnot, but after 9/11 my posting dropped off sharply and has been irregular ever since. The reason for this, I think, was that I felt there were more important things in life than my stupid blog. It just seemed so futile. There are certainly other factors, personal and professional, that also contributed to the dropoff, but I also think I needed to re-examine my goals here. My post-9/11 entries were scarce, and they began to lean more towards politics, as I became determined to keep up on current events. But I didn't want to become a warblogger (I still don't), and this limited my ability to post because I didn't want every entry to be about the latest bullet flying over in the Middle East. So I'm hoping that I can live up to the demands of My Shifting Paradigm...
Posted by Mark on July 10, 2002 at 10:37 PM .: link :.
Sunday, April 14, 2002
Clowns are Scary
Blanky the Clown by riverrun : An E2 piece by the ever brilliant riverrun in which he admits more than a passing discomfort with clowns. In fact, they scare the shit out of him. Given his tale of Blanky, the resident clown in his home town, you could hardly blame him. Though I'll admit a passing discomfort with clowns myself (and, in fact, the entire carnival setting kinda creeps me out), I've had the fortune of never really crossing paths with them. Anyway, riverrun gives a very brief history of clowns, which have been around for quite some time, followed by the somewhat disturbing tale of Blanky.
Posted by Mark on April 14, 2002 at 10:39 PM .: link :.
Monday, February 25, 2002
The Physical Genius and The Art of Failure by Malcolm Gladwell: An interesting duo of pseudo-related articles. The first posits the existence of a "physical genius", someone who possesses an "affinity for translating thought into action". The ironic thing about a physical genius, however, is that they really can't be described by cut-and-dry measurements of athleticism (in other words, there is no measuring stick like IQ for a physical genius). There is, in fact, much more to it than merely performing the act itself; it's knowing what to do. In the other article, The Art of Failure, Gladwell posits that there are two different types of failing: regression and panicking. Regression is when you become so self-conscious that you are thinking explicitly about what to do next instead of relying on your instincts and reactions (which you work hard to put into place; years of tennis lessons will give you an innate tennis sense, so to speak - but if you explicitly start thinking about each step, you will fail). Panicking is a sort of tunnel vision, in which you are so concerned about one problem that you forget you already know the usually simple solution.
Of course, Gladwell makes the points even more elegantly than I just did. In fact, I've found almost all of Gladwell's work fascinating, well researched, and well thought out. I found these two articles interesting because it seems that the physical genius doesn't really regress back to an explicit mode of operation. Why? I think it might be because they never learned these things explicitly, at least, not the same way in which your average person does. They just know what to do, and they do it. I guess that's why they are called "geniuses".
Posted by Mark on February 25, 2002 at 08:46 PM .: link :.
Thursday, February 21, 2002
Disgruntled, Freakish Reflections™ on Happiness
Civilization, Thermodynamics, and 7-Eleven : "Man has never really solved problems so much as exchange one set for another, and what we call progress has simply been a series of shrewd trades that, while never reaching utopia, have at least left us with more desirable issues than the ones before." Everything has advantages and disadvantages, and we attempt to maximize our advantages while minimizing our disadvantages. But you'll notice that the disadvantages are never really eliminated. This is all well and good, but why do so few people see it? It's almost like we were raised to be unhappy. We're shown what we don't have, we learn that success means winning trophies and money, and that happiness relies on how much stuff we have. We're expected to live our life in constant, multi-orgasmic bliss, and if we find ourselves unhappy, then we're a failure. Of course, since we don't live in a Utopia, we will always be unhappy, and thus we will always be seeking new trophies to make us happy. Striving for self-improvement isn't wrong (it's quite honorable), but it won't necessarily make you happier. All too often, we set our sights on that one mystical thing that, if we could just achieve it, would make us happy. The only problem is, if you can't be happy now, chances are, you won't be happy in the future, even if you do achieve your goals.
To paraphrase Dennis Miller, happiness doesn't always require resolution, but rather an in-the-moment, carefree acceptance of the fact that the worst day of being alive is better than any day of being dead. Happiness isn't settling for less, it's just not being miserable with what you've got. So reach for the stars, but remember, you're just trading one set of disadvantages for another, and you might not be any happier than you are now...
Posted by Mark on February 21, 2002 at 01:02 PM .: link :.
Thursday, January 24, 2002
Wing Bowl X
Every year, on the Friday before Super Bowl Sunday, Philadelphians gather at the First Union Center for a different type of contest: The Wing Bowl. A tradition that started 9 years ago, the annual Wing Bowl festivities begin at the crack of dawn. The audience tailgates in the parking lot while the contestants prepare to eat as many Buffalo wings as possible in a 30-minute time-span. It's become a hallmark of Philly life, with more than 20,000 people showing up for last year's event. Only in Philly. Last year's winner is nicknamed "El Wingador", and he ate 137 wings in 30 minutes (the highest score of all time was 164 wings!)
Particularly interesting, and more disgusting than eating 100+ wings in 30 minutes, are the Qualifying Stunts performed by Wing Bowl hopefuls. A good stunt typically includes some gross variety of food, eaten quickly and in mass quantities (strange, as I would think that has little to do with your wing-eating ability). Highlights this year include people eating: four pounds of tripe in 20 minutes, a pig's head (including snout, cheek and the brain), a dozen hard boiled eggs with shells in 24 minutes, fifty raw clams in fifteen minutes, three pounds of head scrapple and a bottle of hot sauce in 20 minutes, and one pound of uncooked penne pasta with only 8 ounces of water in 20 minutes. Only in Philly...
1/25/02 - Update: El Wingador does it again. 143 total wings (81 in the first 14 minutes). Three-time champ. I love the nicknames these guys have; there was a 15-year-old student in the contest - his nickname is Lord of the Wings...
Posted by Mark on January 24, 2002 at 11:27 AM .: link :.
Wednesday, November 07, 2001
No Whammy, no Whammy, STOP!
Back in May of 1984, history was made as Michael Larsen, an unemployed ice cream truck driver from Ohio, managed to win $110,237 on the classic CBS television game show Press Your Luck. Having watched Press Your Luck since it premiered, Larsen came to the conclusion that the swift, seemingly random flashing lights that bounced around the Press Your Luck board were not as random as they seemed. By taping the show religiously and pausing the tapes, Larsen discovered that there were just six light patterns on the board. With this bit of knowledge, he practiced at home while watching the show and realized that he could stop the board wherever and whenever he wanted, if he just had patience. The article is worth visiting, if only to see the looks on the host's face as Larsen racked up the dough. Ironically, Larsen eventually wound up losing all his winnings in a bad housing investment deal.
Posted by Mark on November 07, 2001 at 11:59 AM .: link :.
Wednesday, October 10, 2001
Planetarium is an on-line puzzle story in twelve weekly instalments. The story is presented one week at a time; each week containing three puzzles. At the end of the twelve weeks, the answers to the thirty-six puzzles can be put together to solve a metapuzzle, which ties back into the plot of the story. Planetarium is primarily a story, so it doesn't matter if you solve the puzzles or not; they'll tell you the answers after twelve weeks anyway. Each Planetarium instalment consists of an illustration of a scene in the story, framed in a border with other puzzle elements and buttons. Clicking on the characters (or objects) within the illustration evokes text relating to that character - perhaps a dialogue they are having with another character, or part of the story narrative, or possibly a riddle that the character is presenting. I'm only on the first week, but I think I'm hooked.
I found this link via Mindful Link Propagation, which is notable in and of itself, as it is the latest project over at the Laboratorium and it contains many interesting and thoughtful links.
Posted by Mark on October 10, 2001 at 11:58 AM .: link :.
Tuesday, October 09, 2001
The Fifty Nine Story Crisis
In 1978, William J. LeMessurier, one of the nation's leading structural engineers, received a phone call from an engineering student in New Jersey. The young man was tasked with writing a paper about the unique design of the Citicorp tower in New York. The building's dramatic design was necessitated by the placement of a church. Rather than tear down the church, the designers, Hugh Stubbins and Bill LeMessurier, set their fifty-nine-story tower on four massive, nine-story-high stilts, and positioned them at the center of each side rather than at each corner. This daring scheme allowed the designers to cantilever the building's four corners, allowing room for the church beneath the northwest side.
Thanks to the prodding of the student (whose name was lost in the swirl of subsequent events), LeMessurier discovered a subtle conceptual error in the design of the building's wind braces; they were unusually sensitive to certain kinds of winds known as quartering winds. This alone wasn't cause for worry, as the wind braces would absorb the extra load under normal circumstances. But the circumstances were not normal. Apparently, there had been a crucial change during their manufacture (the braces were fastened together with bolts instead of welds, as welds are generally considered to be stronger than necessary and overly expensive; furthermore, the contractors had interpreted the New York building code in such a way as to exempt many of the tower's diagonal braces from loadbearing calculations, so they had used far too few bolts), which multiplied the strain produced by quartering winds. Statistically, a storm severe enough to tear the joint apart could be expected once every sixteen years (what meteorologists call a sixteen-year storm). This was alarmingly frequent. To further complicate matters, hurricane season was fast approaching.
The potential for a complete catastrophic failure was there, and because the building was located in Manhattan, the danger applied to nearly the entire city. The fall of the Citicorp building would likely cause a domino effect, exacting a devastating toll of destruction on New York.
The story of this oversight, though amazing, is dwarfed by the series of events that led to the building's eventual structural integrity. To avert disaster, LeMessurier quickly and bravely blew the whistle - on himself. LeMessurier and other experts immediately drew up a plan in which workers would reinforce the joints by welding heavy steel plates over them.
Astonishingly, just after Citicorp issued a bland and uninformative press release, all of the major newspapers in New York went on strike. This fortuitous turn of events allowed Citicorp to save face and avoid any potential embarrassment. Construction began immediately, with builders and welders working from 5 p.m. until 4 a.m. to apply the steel "band-aids" to the ailing joints. They built plywood boxes around the joints, so as not to disturb the tenants, who remained largely oblivious to the seriousness of the problem.
Instead of lawsuits and public panic, the Citicorp crisis was met with efficient teamwork and a swift solution. In the end, LeMessurier's reputation was enhanced for his courageous honesty, and the story of Citicorp's building is now a textbook example of how to respond to a high-profile, potentially disastrous problem.
Most of this information came from a New Yorker article by Joe Morgenstern (published May 29, 1995) . It's a fascinating story, and I found myself thinking about it during the tragedies of September 11. What if those towers had toppled over in Manhattan? Fortunately, the WTC towers were extremely well designed - they didn't even noticeably rock when the planes hit - and when they did come down, they collapsed in on themselves. They would still be standing today too, if it wasn't for the intense heat that weakened the steel supports.
Posted by Mark on October 09, 2001 at 08:04 AM .: link :.
Monday, September 10, 2001
I Play Too Much Solitaire, and it's Putting Me in a Time Warp by Douglas Coupland : Why do I choose to waste time playing solitaire? And why will I, in all likelihood, cheerfully continue to waste thousands more hours playing solitaire? These are questions Coupland, and no doubt, millions of others, have pondered. Interestingly enough, I find that this spills over into much more than solitaire. What of my thousands of NHL 98 or Unreal Tournament games? Or the countless hours spent trolling the net? Time wasted? Perhaps. Will I continue to waste it? Undoubtedly. Why? I have no idea. Coupland's father used to play solitaire all the time, and now, thanks to a computer, he still plays almost every day. When asked why, he replies:
"That's easy. Every time I press the key and it deals me a new round, I get this immense burst of satisfaction knowing that I didn't have to shuffle the cards and deal them myself. It's payback time for all the hours I ever wasted in my life shuffling and dealing cards."
Which brings me to the thought that maybe we aren't really wasting time at all. Maybe we just need to realize that the past is gone, whether we like it or not. By the way, I found Coupland's site insightful and fun, though I'm a bit annoyed at the use of Flash (is it really necessary to put a full text article into Flash? It sure as hell makes it difficult to pull quotes!)
Posted by Mark on September 10, 2001 at 11:15 AM .: link :.
Friday, August 31, 2001
Someone is a werewolf. Someone ... in this very room.
Werewolf is a simple game for a large group of people (seven or more). Two of the players are secretly werewolves. They are trying to slaughter everyone in the village. Everyone else is an innocent human villager, but one of the villagers is a seer (who can detect lycanthropy). Some people call it a party game, but it's a game of accusations, lying, bluffing, second-guessing, assassination, and mob hysteria. Sounds like a blast to me. [via metafilter]
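The setup described above amounts to dealing hidden roles: two werewolves, one seer, and villagers for everyone else in a group of seven or more. A minimal sketch of that deal (the function and player names are my own, purely for illustration):

```python
import random

def deal_roles(players, seed=None):
    """Assign Werewolf roles: two werewolves, one seer, the rest villagers.

    The game as described needs seven or more players."""
    if len(players) < 7:
        raise ValueError("Werewolf needs seven or more players")
    roles = ["werewolf", "werewolf", "seer"]
    roles += ["villager"] * (len(players) - len(roles))
    rng = random.Random(seed)
    rng.shuffle(roles)  # secrecy: nobody knows who got what
    return dict(zip(players, roles))

table = deal_roles(["Al", "Bo", "Cy", "Di", "Ed", "Fi", "Gus"], seed=7)
# Sorted role list is the same regardless of the shuffle:
print(sorted(table.values()))
# ['seer', 'villager', 'villager', 'villager', 'villager', 'werewolf', 'werewolf']
```

The shuffled deal is the whole trick; everything after that (the accusations, the lying, the mob hysteria) is played out by the humans.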
I recently participated in a similar game called "The Mole" in which there are two teams trying to complete certain tasks, except that there's a saboteur (a "mole") on each team. Of course, my team emerged victorious, thanks mostly to a brilliant strategy in the opening round, resulting in a commanding lead for my team. The other team became a little bitter about that, as evidenced by this highly biased, but also hilarious mock review of the event (I am the one referred to as "Mark" in said review).
Posted by Mark on August 31, 2001 at 02:37 PM .: link :.
Wednesday, August 15, 2001
The Mob is an American business institution. Killing people is just part of the business, but it's a very costly part. Cops look the other way for burglary or hijacking, but not for murder. The press and the public don't generally tolerate this sort of thing, and yet, those very murders that bring the most powerful wrath of law enforcement and public scrutiny down on the Mob are responsible for their greatest cultural legacy. [Warning: graphic images ahead - proceed at your own risk] Who can forget the picture of Carmine Galante sprawled on a restaurant floor, cigar in his mouth? Or the bloody picture of Ben "Bugsy" Siegel, his face pretty much blown off? These infamous Mafia hits stick in our consciousness longer than any degree of bootlegging or hijacking ever could.
Update: Removed links to images because Google images was acting funny.
Posted by Mark on August 15, 2001 at 09:25 AM .: link :.
Thursday, July 12, 2001
Everyone has had a terrible customer support experience at least once in their life. Those who are cursed into having to deal with customer service often would do well to learn The Art of Turboing. Turboing, essentially, refers to the actions of a customer who goes around the normal technical support process by contacting a senior person in the chain of command. The article does a great job describing the process and how to go about it. The idea of Turboing sounds worse than it is, but it is also made clear that you should turbo only when you've exhausted all other avenues of support and hit a dead end. So go forth, my service-maligned readers, and Turbo your way to victory. Or something. [via memepool]
Some good stuff being discussed over at DyREnet's message board. First, it seems that Drifter has revealed the great secrets of Man.com (the mystery that started with a cryptic and utterly annoying Tandem Story entry on this page). Also, check out the discussion on Coke, including my own moronic exploits with cola.
Posted by Mark on July 12, 2001 at 02:57 PM .: link :.
Wednesday, July 11, 2001
Searching for Bobby Fischer
A Mystery Wrapped in an Enigma by William Lombardy : A 1974 Sports Illustrated article providing a detailed account of Bobby Fischer's struggle and eventual victory in the 1972 World Chess Championship. I've never been much good at Chess, but I have a certain fascination and respect for those who are. Fischer comes off as emotionally unstable in the article, but I have this sneaking sort of suspicion that every little move (or complaint) he made was calculated. Sometimes he won before he even entered the arena. But then, he is definitely an odd person as well, so who really knows?
Posted by Mark on July 11, 2001 at 04:49 PM .: link :.
Thursday, July 05, 2001
Probabilities in the Game of Monopoly has all the numbers you could ever possibly need to play Monopoly more efficiently: most probable squares, how long it takes for investments to pay off, which properties are better to mortgage, where to build hotels, which squares get landed on first.
The railroads are excellent investments, particularly when owned together, although in absolute income terms they don't keep up with heavily built-on properties later in the game. The best return on investment to be found is from putting a third house on New York Avenue. In fact, the third house has the fastest payoff of any building on almost all of the properties. The square most landed on other than Jail is Illinois Avenue, and in fact a hotel there will bring the most income other than a hotel on Boardwalk. By far the worst individual investment is to buy Mediterranean Avenue without first owning Baltic. That's not to say that you shouldn't buy it, but it's not going to make you much money without quite a bit of construction. The properties between the Jail square and the Go To Jail square are landed on the most, because of the jump caused by landing on Go To Jail. The Orange ones have the biggest bang for the buck as far as building goes.
All the probabilities were conducted with a long-term computer simulation. I suppose this whole thing may seem excessive, but it is quite interesting and nice to know that the orange properties are the best to own and build on. The simulations do not, however, take into account all the shady dealings between players (I'll trade you St. Charles Place, which will give you a monopoly, for Baltic Ave. and 5 free passes on any of your properties) that can be ever-so-crucial to the outcome of the game. [via Bifurcated Rivets]
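The excerpt's numbers come from a long-run simulation; a heavily stripped-down version of the same idea can be sketched in a few lines. This is my own toy model, not the linked site's code: it uses the standard board indices (Go To Jail at square 30, Jail at 10) but ignores Chance/Community Chest cards and the three-doubles rule, so it only captures the Go To Jail jump that drives traffic toward the orange properties.

```python
import random

GO_TO_JAIL, JAIL, BOARD = 30, 10, 40

def simulate(turns=100_000, seed=1):
    """Estimate landing frequencies with a simplified Monopoly walk.

    Each turn is one two-dice roll; the only special rule modeled
    is the Go To Jail -> Jail jump."""
    rng = random.Random(seed)
    counts = [0] * BOARD
    pos = 0
    for _ in range(turns):
        pos = (pos + rng.randint(1, 6) + rng.randint(1, 6)) % BOARD
        if pos == GO_TO_JAIL:
            pos = JAIL
        counts[pos] += 1
    return counts

counts = simulate()
# Jail absorbs every Go To Jail landing, so the 6-12 squares past it
# (the orange and red stretch) see extra traffic on the next roll.
print(counts[JAIL], counts[GO_TO_JAIL])  # Go To Jail is never a resting square
```

Even this crude model shows Jail landing at roughly twice the average rate, which is why the squares a dice-roll past it are the ones worth building on; the full simulations add the card decks and doubles rules on top.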
Posted by Mark on July 05, 2001 at 01:01 PM .: link :.
Friday, June 29, 2001
Is It O.K. to Be a Luddite? by Thomas Pynchon : Luddite. It sounds like an element doesn't it? Basically, a Luddite is someone who opposes technology. Pynchon tackles the subject with his usual gusto:
Except maybe for Brainy Smurf, it's hard to imagine anybody these days wanting to be called a literary intellectual, though it doesn't sound so bad if you broaden the labeling to, say, "people who read and think." Being called a Luddite is another matter. It brings up questions such as, Is there something about reading and thinking that would cause or predispose a person to turn Luddite? Is It O.K. to be a Luddite? And come to think of it, what is a Luddite, anyway?
Pynchon goes into the history of Luddites, from Ned Lud straight through to Frankenstein and Star Wars references - oh, and let's not forget that all-important folk hero, the Badass. There's something about scholarly discussion of the Badass that I just find compelling. Anyway, if anyone wants to give themselves a headache, check out Pynchon's acclaimed classic Gravity's Rainbow (and for people who want to lessen the strength of said headache, you can buy a 345-page book containing the Sources and Contexts for Pynchon's novel). Actually, from what I've read of it (which is, admittedly, not much), it's quite good. [via wood s lot]
Posted by Mark on June 29, 2001 at 02:31 PM .: link :.
Tuesday, June 19, 2001
How Science Ignores the Natural World
Where the Buffalo Roam - How Science Ignores the Natural World : An interview with Vine Deloria, one of the most important living Native American writers. Central to Deloria's critique of Western culture is the understanding that, by subduing nature, we have become slaves to technology and its underlying belief system.
"...Indians experience and relate to a living universe, whereas Western people - especially scientists - reduce all things, living or not, to objects. The implications of this are immense. If you see the world around you as a collection of objects for you to manipulate and exploit, you will inevitably destroy the world while attempting to control it. Not only that, but by perceiving the world as lifeless, you rob yourself of the richness, beauty, and wisdom to be found by participating in its larger design."
This is the sort of thing you don't hear very often, and it's very interesting. Deloria makes some great points (along with some I don't particularly agree with, but which are interesting nonetheless), especially about science and how it attempts to reduce everything to a paradigm. Doing so certainly has its value, but much like every other version of reality that is forwarded, science is not completely satisfactory.
"...the point is to ask the questions, and keep asking them."
Right on. [via liquid gnome]
Posted by Mark on June 19, 2001 at 11:49 AM .: link :.
Wednesday, June 06, 2001
Structured Procrastination : an amazing strategy that converts procrastinators into effective human beings, respected and admired for all that they can accomplish and the good use they make of time. I like this optimistic approach, turning a weakness into a strength. The website itself is basically another project in a long series of attempts aimed at avoiding responsibility. It's funny how I have always noticed this situation, where I seem to be at my most creative when I've got tons of important stuff I should be doing, but never got around to articulating it like this guy did. [via cafedave.net]
Procrastination: "Hard work often pays off after time, but laziness always pays off now."
Posted by Mark on June 06, 2001 at 08:49 AM .: link :.
Monday, May 14, 2001
Football gets in touch with its feminine side: The Philadelphia Liberty Belles are one of 10 charter members of the National Women's Football League. The 45-woman team, which plays on high school fields and travels by bus, has romped over its first three opponents in an eight-game schedule that runs from April to June. The players buy their own uniforms, pay their own insurance, and raise money with car washes. And they don't earn a cent, despite their ass-kicking performance. So far the Belles have shellacked three opponents by a combined score of 106-6. I've never seen them play, but I imagine it would be quite an entertaining experience; not just because they're women, but because they're genuinely in love with the game of football. Some day, if and when they become profitable, the league might lose that quality, so I hope to catch a game soon...
Posted by Mark on May 14, 2001 at 08:53 AM .: link :.
Thursday, May 10, 2001
Hope and Gory
Chuck Palahniuk (author of Fight Club) writes about the Olympic wrestling trials. Amateur wrestling, not WWF or any of its ilk. The article for the most part gets it right. I was a wrestler. I have cauliflower ear. I cut too much weight. I've walked off the mat and puked in a trash can. I broke my thumb once. I had ringworm. I did it all. And I wasn't even that good. So why did I do it? For the life of me, I really can't nail down a solid answer to that question, yet I know that if I could do it again, I would. Palahniuk focuses mostly on the physical pains of wrestling, but there's more to the sport than pain. Pain is a part of it, and it's not a bad thing either (and Palahniuk does a good job describing this), but there's a lot of technique, elegance, and beauty in the sport as well. Sometimes it just takes a wrestler to recognize it when it's happening. Which, I suppose, is why the sport has such a weird reputation...
Posted by Mark on May 10, 2001 at 02:04 PM .: link :.
Friday, April 20, 2001
File this under "Corny"
The Collective Unconsciousness Project is an interesting attempt at creating a non-linear experience based on chance and the user's interactions. Users can contribute to the site by logging their dreams, then explore those dreams in an environment that lets you travel from dream to dream in a non-linear yet interconnected way - without being made aware of what those connections are, and without being in control of the path you take. The flow will be based on things like the dream you are currently viewing, what you've viewed in the past, what dreams you've entered into your dream log, what emotions are related to that dream, etc. Unexpected connections will be made, with hopefully interesting results. It's not functional yet (not enough people have entered dreams), but once it is, I think it will be worth viewing... Go and enter your dreams now (no registration required).
Posted by Mark on April 20, 2001 at 04:41 PM .: link :.
Tuesday, March 20, 2001
UAIOE for you and me
This Evolution of Alphabets page brings a little-known subject to life with sensible, concise animations. You can see the evolution of eight character sets, including our very own Latin character set. It's always nice to see people using web animation for something useful. [via blog.org]
Posted by Mark on March 20, 2001 at 01:09 PM .: link :.
Saturday, March 17, 2001
In the movie Pi, there are several scenes where the movie's protagonist takes a break from his work to visit his teacher and mentor. During these visits, they play an ancient Asian game called Go. Basically, the Go board has a grid and some black and white stones. The rules of Go are incredibly simple, yet mastering the game is a lifelong, and sometimes life-consuming, effort. Indeed, the game is much more than just a game to its devoted players. Some people kill themselves when they lose. Some do it for a living. Some people even believe that it could save our public education system. For others, it represents the Holy Grail of computing (as it is incredibly difficult to program). Pi was originally supposed to pit student and mentor against each other in a game of chess, but they changed it to Go, and the movie benefits greatly. For Go reflects the common themes of the movie; Go represents a certain synthesis between spiritual and rational life...[thanks alt-log]
Posted by Mark on March 17, 2001 at 09:47 AM .: link :.
Thursday, February 22, 2001
Trapped Inside the Box
In yesterday's exercise, we saw that thinking outside the box was important, but that certainly doesn't mean thinking inside the box isn't important. It is often useful to quickly classify someone or something based on a small set of criteria which may or may not give an accurate description of said person. It's very similar to the information filtering Umberto Eco spoke about in that interview I posted a while back. In certain situations, we absolutely must revert to simple mental models just to filter all the information coming at us. It doesn't matter how imperfect that filter is, we just need something or else we won't accomplish anything. I'm also fascinated by the ingenuity of people who are forced to think within a box (and the ways they work around it). My favourite example is Isaac Asimov's 3 Laws of Robotics:
Posted by Mark on February 22, 2001 at 06:12 PM .: link :.
Wednesday, February 21, 2001
Thinking Outside of the Pie
A simple exercise:
The circle to the right represents a pie. Your goal is to cut this pie into 8 pieces using only three lines. Have at it!
Solution (swipe text below):
The trick to figuring this out is thinking three-dimensionally. First, quarter the circle with two lines (or slices, if you will). Then remember that there is a third dimension that cannot be seen in the picture. If you were to cut along that axis, you would have 8 pieces of pie!
Posted by Mark on February 21, 2001 at 06:30 PM .: link :.
Sunday, February 18, 2001
DyREnet has some useful tips for better living. Samæl's extremely happy with his new Houseplant, while Spencer was let down by her Papermate-Comfort Mate, medium ball, black ink, click-action, writing pen after years of support. DyREnet also has some new and spiffy random taglines. Some of my favourites include: "Still legal in sixteen states.", "no Subliminal mEssages eXistant here", "We're not quite the downfall of man, but we're trying.", and "The masses have spoken; we just didn't listen." Keep it up, DyRE, and I'll have to kill you.
Uh, well, maybe not.
Posted by Mark on February 18, 2001 at 08:43 PM .: link :.
Thursday, January 25, 2001
When Minotaurs Attack!
Theseus and the Minotaur, an addictive Java applet game that is also quite difficult. There's also a history of Theseus and the Minotaur Mazes and other (easier) mazes. [thanks to eatonweb]
Posted by Mark on January 25, 2001 at 01:11 PM .: link :.
Wednesday, January 24, 2001
A Conversation on Information
Umberto Eco is a professor of semiotics, philosophy and literature at the University of Bologna in Italy, and he is well known for his academic publications as well as popular fiction such as The Name of the Rose and Foucault's Pendulum (which I am currently reading). In this interview, Eco discusses the Internet, information overload and filtering, hypertext, hypermedia and virtual reality. He was very open-minded and articulate in his descriptions and criticism of the internet and information filtering, especially given that the internet was not very developed at the time.
"I am not saying that Internet is, or will be a negative experience. I am saying on the contrary that it is a great chance. Once we have asserted this, I am trying to isolate the possible traps; the possible negative aspects." Much time is spent discussing information filtering: why it is necessary and why it becomes difficult on a system like the internet, where the number of options is often overwhelming (like going to Google, typing Umberto Eco, and getting back 61,200 results). Another topic is communities on the internet. He is enthusiastic about the possibilities, but he adds that the information still must be filtered. You must choose which posts and authors you wish to read, and we often choose them randomly, but if we had a filter we could know which posts are important and which are crap. Regardless, he likes the idea of finding new ideas and perspectives through the internet community. "Is that a substitute for face-to-face contact and community? No, it isn't!" Fascinating stuff.
Posted by Mark on January 24, 2001 at 12:28 PM .: link :.
Tuesday, December 19, 2000
Check out The Window, for role playing the way it should be ("simple, usable, and universal"). The Three Precepts on which it is based are solid and actually contribute to the storytelling aspects of RPGs (as evidenced in the third precept: "A good story is the central goal." ) Check it out, I found it fascinating (and I don't even play RPGs anymore). In fact, some of those ideas there have inspired me to perhaps create a different form of Tandem Story...
Posted by Mark on December 19, 2000 at 01:51 PM .: link :.
Monday, December 18, 2000
Ushering in Twelve Eighteen
Yes, today is twelve eighteen. What, you may ask, is twelve eighteen? Well, it's one two one eight. Before you ask, one two one eight is twelve eighteen. What the hell does this have to do with anything? Everything, of course. Chaos theorists have pondered those stories carefully (specifically the Yankee Stadium incident and the mathematics of 1218), and some believe them to be central in gaining the necessary understanding of the universe.
Posted by Mark on December 18, 2000 at 12:23 PM .: link :.
Thursday, December 14, 2000
Ever wonder what the airlines do with your luggage? Sure, they claim 97% of lost luggage is returned to its rightful owners within 24 hours and another 1.5% within 2 days, but what about the remaining 1.5%? Well, after 6 weeks, they sell it (and going by the percentages, this works out to be somewhere around 435,000 bags). Apparently most of the lost bags end up in a small Alabama town at the Unclaimed Baggage Center, where they, in turn, sell the contents of the lost bags at discount prices. In case you don't feel like hopping on a plane to visit Alabama (what would you do with your luggage?), you can always visit their webpage and buy stuff online.
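The "somewhere around 435,000 bags" figure follows from a quick bit of reverse arithmetic. A minimal sketch of the math (the annual total is inferred from the article's percentages, not an official airline statistic):

```python
# Back-of-the-envelope check of the luggage figures above.
returned_24h = 0.97    # returned within 24 hours
returned_2d = 0.015    # returned within 2 days
unclaimed = 1 - returned_24h - returned_2d   # the remaining ~1.5% never claimed

bags_sold = 435_000    # bags that end up sold after 6 weeks

# Implied total of mishandled bags: if 1.5% equals 435,000 bags,
# the airlines must be losing about 29 million bags in total.
total_lost = bags_sold / unclaimed
print(round(unclaimed, 3), round(total_lost))
```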
Posted by Mark on December 14, 2000 at 12:45 PM .: link :.
Tuesday, December 12, 2000
Pudding-Factory Disaster Brings Slow, Creamy Death to Town Below: This article ran a while ago, but I think it's the funniest thing I have ever read over at The Onion. An excerpt: "Sweet, creamy death swept through this small Illinois town Monday... burying hundreds of residents in a rich, smooth tidal wave of horrifying pudding goodness." Priceless descriptions of the tragic, tragic horror of the delectably Choco-Licious death-pudding.
Posted by Mark on December 12, 2000 at 12:27 PM .: link :.
Monday, December 11, 2000
To find the perfect gift for those hopeless people in your life, go to Despair, Inc., a company that sells demotivational posters similar to the popular motivational posters found in most business settings. My favourite demotivators:
Posted by Mark on December 11, 2000 at 12:38 PM .: link :.
Friday, December 08, 2000
Lawyer Wants To Bar Christmas as Federal Holiday: This grinch has been trying to steal Christmas for almost 3 years now, arguing that having Christmas as a federal holiday violates the separation of church and state: "the Christmas holiday amounts to a government approval for a day of Christian religious origins marking the birth of Jesus Christ." This guy obviously doesn't know much about the History of Christmas, which has its origins in pagan rituals that were later adopted by Christianity to celebrate the birth of Christ. In my opinion, Christmas is such a wondrous holiday because of its secular aspects, including holly, ivy, mistletoe, Christmas trees, Santa Claus, snowmen, jingling bells and presents on Christmas morning (which have been repeatedly recognized by US Courts). Furthermore, this is a season whose very message transcends any specific religion, ideology, or tradition to become an occasion for collective reflection on the values that bring us together. Let's just hope the Courts stand firm...
Posted by Mark on December 08, 2000 at 09:30 AM .: link :.
Tuesday, December 05, 2000
This Java applet attempts to implement the classic "Eliza" program. It pretends to be a Rogerian psychologist. It was groundbreaking in its time, but it is ultimately a lacking AI system (that, or Rogerian psychologists are complete morons, which is probably not too far from the truth). It's pretty easy to take advantage of the system. As DyRE found out, Never Go to a Rogerian Psychologist When You're On Fire.
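Part of why Eliza is so easy to game is how simple the underlying trick is: match a keyword pattern, swap first- and second-person pronouns, and echo the user's own words back as a question. A minimal sketch of the technique in Python (hypothetical; not the applet's actual code or rule set):

```python
import re

# Swap first- and second-person words so the echo reads as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, response template) pairs, tried in order; {0} is the
# captured fragment. The catch-all last rule is Eliza's famous dodge.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment):
    """Swap pronouns word by word, leaving unknown words alone."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Return the first matching rule's template, filled with the
    pronoun-reflected capture."""
    cleaned = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am on fire"))  # → "Why do you say you are on fire?"
```

The on-fire exchange shows exactly why the system breaks down: it has no model of urgency or meaning, only surface patterns, so a burning patient gets the same placid reflection as anyone else.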
Posted by Mark on December 05, 2000 at 08:52 AM .: link :.
Thursday, November 30, 2000
Some recent headlines (no, they are not from The Onion, but they probably should be):
Posted by Mark on November 30, 2000 at 12:49 PM .: link :.
Tuesday, October 31, 2000
Some interesting happenings in the world of Exorcism. In a recent study that highlights the bendability of memory and perception, psychologists were able to convince normally skeptical people that they had experienced a possession at some point in their life. As if marking the occasion, the 1973 classic film The Exorcist was recently re-released, and psychologists expect a rash of new possessions. Also, it seems the old rite of exorcism is gaining new respect. I read the book by William Peter Blatty a while back, and was surprised at just how detailed the psychological aspect of the story was. By the end of the book, I was still unsure whether the possession was caused by psychological influences or some supernatural power. In fact, the rite of exorcism was shown to be a very scientific, methodical process, and I was duly impressed with the novel's objective study. However, the book does not quite capture the pea-soup-projectile-vomit themes too well :-)
Posted by Mark on October 31, 2000 at 06:08 PM .: link :.
Friday, October 27, 2000
Terror Behind the Walls
I recently visited Eastern State Penitentiary's haunted house, Terror Behind the Walls. It was a pretty good haunted house; my only complaint is that there were way too many people walking through with me (thus I saw many of the people in front of me get scared). The creepiest part, however, was simply walking down the dark corridors of the old, decaying site, looking into the cells and seeing only darkness. At the end of the tour, there was a small museum showing the far more interesting history of the old penitentiary.
Eastern State Penitentiary was built in the 1820s under the Quaker philosophy of reform through solitude and reflection, and has held the likes of Al Capone and Willie Sutton. Covering around 11 acres in Philadelphia, it has become a Historic Site. From the moment he arrived until the moment he left, the prisoner would see no one. The furniture of the 8x12 cell consisted of a mattress and a bible. "...Silence, solitude, the bible, never a moment of human contact, never a voice heard at a distance, the dead world of a living tomb..." In the end, the solitary confinement of Eastern State ended up driving most of its inmates insane, until 1903 when the idea of complete isolation was abandoned. By the time Eastern State was closed in 1971, it had become just another old, crowded prison with the usual share of brutality, riots, hunger strikes, escapes, suicides, and scandals. I think a regular guided tour and commentary would be scarier than the haunted house was...
Posted by Mark on October 27, 2000 at 10:34 AM .: link :.
Friday, October 06, 2000
Light the Lamp
Let's go, Flyers! Hockey season doth rock, and the Flyers won their season opener 6-3. Young Justin Williams looked mighty impressive, but rookies have a way of starting strong and dropping off fast. The Flyers themselves looked OK, but they were still making a bunch of stupid mistakes that could have cost them the game. I predict that they will get clobbered by Boston on Saturday (by a score of 5-1).
Posted by Mark on October 06, 2000 at 09:14 AM .: link :.
Tuesday, September 19, 2000
Bert is Evil!
Hold on to your crackpipes, kiddies, it's time for a piece of classic web trash: Bert is Evil!. One of the funniest things I have ever seen on the web.
Posted by Mark on September 19, 2000 at 01:37 PM .: link :.
Where am I?
This page contains entries posted to the Kaedrin Weblog in the Culture Category.
Copyright © 1999 - 2012 by Mark Ciocco.