Sunday, June 28, 2009
Interrupts and Context Switching
To drastically simplify how computers work, you could say that computers do nothing more than shuffle bits (i.e. 1s and 0s) around. All computer data is based on these binary digits, which are represented in computers as voltages (5 V for a 1 and 0 V for a 0), and these voltages are physically manipulated through transistors, circuits, etc... When you get into the guts of a computer and start looking at how it works, it seems amazing how many operations it takes to do something simple, like addition or multiplication. Of course, computers have gotten a lot smaller and thus a lot faster, to the point where they can perform millions of these operations per second, so it still feels fast. The processor performs these operations in a serial fashion - basically a single-file line of operations.
This single-file line can be quite inefficient, and there are times when you want a computer to be processing many different things at once rather than one thing at a time. For example, most computers rely on peripherals for input, but those peripherals are often much slower than the processor itself. When a program needs some data, it may have to read that data from the hard drive first. This may only take a few milliseconds, but the CPU would be idle during that time - quite inefficient. To improve efficiency, computers use multitasking. A CPU can still only be running one process at a time, but multitasking gets around that by scheduling which tasks will be running at any given time. The act of switching from one task to another is called Context Switching. Ironically, the act of context switching adds a fair amount of overhead to the computing process. To ensure that the original running program does not lose all its progress, the computer must first save the current state of the CPU in memory before switching to the new program. Later, when switching back to the original, the computer must load the state of the CPU from memory. Fortunately, this overhead is often offset by the efficiency gained with frequent context switches.
If you can do context switches frequently enough, the computer appears to be doing many things at once (even though the CPU is only processing a single task at any given time). Telling the CPU to do a context switch is often accomplished with a signal called an Interrupt. For the most part, the computers we're all using are interrupt driven, meaning that running processes are often interrupted by higher-priority requests, forcing context switches.
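To make the idea a little more concrete, here's a toy sketch in Python (the two tasks and the round-robin scheduler are purely illustrative - this is a loose analogy, not how an operating system or CPU actually does it): each generator stands in for a running program, and every yield plays the role of an interrupt that forces the scheduler to save that task's state and switch to another one.

# Toy context switching: each "task" is a Python generator, and yielding
# acts like an interrupt - the task's state is saved automatically, and
# the scheduler switches to the next task in line.
def task(name, steps):
    for i in range(steps):
        print(name, "step", i)
        yield  # "interrupt": suspend here until the scheduler resumes us

def round_robin(tasks):
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            next(current)          # restore the task's state and run to the next yield
            queue.append(current)  # context switch: send it to the back of the line
        except StopIteration:
            pass                   # task finished, drop it

round_robin([task("A", 3), task("B", 3)])

Even in this toy version you can see the overhead described above: the scheduler spends time shuffling tasks around in addition to actually running them.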
This might sound tedious to us, but computers are excellent at this sort of processing. They can do millions of operations per second, and generally have no problem switching from one program to another and back again. The way software is written can be an issue, but the core functions of the computer described above happen in a very reliable way. Of course, there are physical limits to what can be done with serial computing - we can't change the speed of light or the size of atoms, among other physical constraints - so performance cannot continue to improve indefinitely. The big challenge for computers in the near future will be figuring out how to use parallel computing as well as we now use serial computing. Hence all the talk about multi-core processing (most commonly 2 or 4 cores).
Parallel computing holds the promise of doing many things that are far beyond our current technological capabilities. For a perfect example of this, look no further than the human brain. The neurons in our brain are incredibly slow when compared to computer processor speeds, yet we can rapidly do things which are far beyond the abilities of the biggest and most complex computers in existence. The reason is that there is a truly massive number of neurons in our brain, and they're all operating in parallel. Furthermore, their configuration appears to be in flux, frequently changing and adapting to various stimuli. This part is key, as it's not so much the number of neurons we have as how they're organized that matters. In mammals, brain size roughly correlates with the size of the body. Big animals generally have larger brains than small animals, but that doesn't mean they're proportionally more intelligent. An elephant's brain is much larger than a human's brain, but elephants are obviously much less intelligent than humans.
Of course, we know very little about the details of how our brains work (and I'm not an expert), but it seems clear that neither brain size nor neuron count is as important as how neurons are organized and crosslinked. The human brain has a huge number of neurons (somewhere on the order of one hundred billion), and each individual neuron is connected to several thousand other neurons (leading to a total number of connections in the hundreds of trillions). Technically, neurons are "digital" in that if you were to take a snapshot of the brain at a given instant, each neuron would be either "on" or "off" (i.e. a 1 or a 0). However, neurons don't work like digital electronics. When a neuron fires, it doesn't just turn on, it pulses. What's more, each neuron is accepting input from and providing output to thousands of other neurons. Each connection has a different priority or weight, so that some connections are more powerful or influential than others. Again, these connections and their relative influence tend to be in flux, constantly changing to meet new needs.
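As an aside, that weighted-connection idea is roughly what artificial neural networks borrow from the brain. Here's a deliberately crude sketch in Python - a single artificial "neuron" that sums its inputs scaled by connection weights and fires only if the total crosses a threshold. The numbers are made up, and real neurons (which pulse, adapt, and rewire) are vastly more complicated, so treat this only as an illustration of weighting:

# A crude illustration of weighted connections: some inputs count for
# more than others, and the "neuron" fires only if the weighted total
# crosses a threshold.
def neuron_fires(inputs, weights, threshold=1.0):
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

print(neuron_fires([1, 1, 0], [0.9, 0.3, 0.5]))  # True: 0.9 + 0.3 = 1.2
print(neuron_fires([0, 1, 1], [0.9, 0.3, 0.5]))  # False: 0.3 + 0.5 = 0.8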
This turns out to be a good thing in that it gives us the capability to be creative and solve problems, to be unpredictable - things humans cherish and that computers can't really do on their own.
However, this all comes with its own set of tradeoffs, and the most relevant one for this post is that humans aren't particularly good at doing context switches. Our brains are actually great at processing a lot of information in parallel. Much of it is subconscious - heart pumping, breathing, processing sensory input, etc... Those are also things that we never really cease doing (while we're alive, at least), so those resources are pretty much always in use. But because of the way our neurons are interconnected, sometimes those resources trigger other processing. For instance, if you see something familiar, that sensory input might trigger memories of childhood (or whatever).
In a computer, everything happens in serial and thus it is easy to predict how various inputs will impact the system. What's more, when a computer stores its CPU's current state in memory, that state can be restored later with perfect accuracy. Because of the interconnected and parallel nature of the brain, doing this sort of context switching is much more difficult. Again, we know very little about how the human brain really works, but it seems clear that there is short-term and long-term memory, and that the process of transferring data from short-term memory to long-term memory is lossy. A big part of what the brain does seems to be filtering data, determining what is important and what is not. For instance, studies have shown that people who do well on memory tests don't necessarily have a more effective memory system, they're just better at ignoring unimportant things. In any case, human memory is infamously unreliable, so a context switch introduces a lot of thrash in what you were originally doing: you will have to do a lot of duplicate work to get yourself back to your original state (something a computer has a much easier time doing). When you're working on something specific, you're dedicating a significant portion of your conscious brainpower towards that task. In other words, you're probably engaging millions if not billions of neurons in the task. When you consider that each of these is interconnected and working in parallel, you start to get an idea of how complex it would be to reconfigure the whole thing for a new task. In a computer, you only need to ensure the current state of a single CPU is saved. Your brain, on the other hand, has a much tougher job, and its memory isn't quite as reliable as a computer's memory. I like to refer to this as mental inertia. This sort of issue manifests itself in many different ways.
One thing I've found is that it can be very difficult to get started on a project, but once I get going, it becomes much easier to remain focused and get a lot accomplished. But getting started can be a problem for me, and finding a few uninterrupted hours to delve into something can be difficult as well. One of my favorite essays on the subject was written by Joel Spolsky - it's called Fire and Motion. A quick excerpt:
Many of my days go like this: (1) get into work (2) check email, read the web, etc. (3) decide that I might as well have lunch before getting to work (4) get back from lunch (5) check email, read the web, etc. (6) finally decide that I've got to get started (7) check email, read the web, etc. (8) decide again that I really have to get started (9) launch the damn editor and (10) write code nonstop until I don't realize that it's already 7:30 pm.
I've found this sort of mental inertia to be quite common, and it turns out that there are several areas of study based around this concept. The state of thought where your brain is up to speed and humming along is often referred to as "flow" or being "in the zone." This is particularly important for working on things that require a lot of concentration and attention, such as computer programming or complex writing.
From my own personal experience a couple of years ago during a particularly demanding project, I found that my most productive hours were actually after 6 pm. Why? Because there were no interruptions or distractions, and a two hour chunk of uninterrupted time allowed me to get a lot of work done. Anecdotal evidence suggests that others have had similar experiences. Many people come into work very early in the hopes that they will be able to get more done because no one else is there (and complain when people do show up that early). Indeed, a lot of productivity suggestions basically amount to carving out a large chunk of time and finding a quiet place to do your work.
A key component of flow is finding a large, uninterrupted chunk of time in which to work. It's also something that can be difficult to do at a lot of workplaces. Mine is a 24/7 company, and the nature of our business requires frequent interruptions, so many of us are in a near constant state of context switching. Between phone calls, emails, and instant messaging, we're sure to be interrupted many times an hour if we're constantly keeping up with them. What's more, some of those interruptions will be high priority and require immediate attention. Plus, many of us have a lot of meetings on our calendars, which only makes it more difficult to concentrate on something important.
Tell me if this sounds familiar: You wake up early and during your morning routine, you plan out what you need to get done at work today. Let's say you figure you can get 4 tasks done during the day. Then you arrive at work to find 3 voice messages and around a hundred emails and by the end of the day, you've accomplished about 15 tasks, none of which are the 4 you had originally planned to do. I think this happens more often than we care to admit.
Another example: if it's 2:40 pm and I know I have a meeting at 3 pm, should I start working on a task I know will take me 3 solid hours to complete? Probably not. I might be able to get started and make some progress, but as soon as my brain starts firing on all cylinders, I'll have to stop working and head to the meeting. Even if I did get something accomplished during those 20 minutes, chances are that when I get back to my desk to get started again, I'm going to have to refamiliarize myself with the project and what I had already done before proceeding.
Of course, none of what I'm saying here is especially new, but in today's world it can be useful to remind ourselves that we don't need to always be connected or constantly monitoring emails, RSS, facebook, twitter, etc... Those things are excellent ways to keep in touch with friends or stay on top of a given topic, but they tend to split attention in many different directions. It's funny, when you look at a lot of attempts to increase productivity, efforts tend to focus on managing time. While important, we might also want to spend some time figuring out how we manage our attention (and the things that interrupt it).
(Note: As long and ponderous as this post is, it's actually part of a larger series of posts I have planned. Some parts of the series will not be posted here, as they will be tailored towards the specifics of my workplace, but in the interest of arranging my interests in parallel (and because I don't have that much time at work dedicated to blogging on our intranet), I've decided to publish what I can here. Also, given the nature of this post, it makes sense to pursue interests in my personal life that could be repurposed in my professional life (and vice versa).)
Posted by Mark on June 28, 2009 at 03:44 PM .: link :.
Wednesday, June 24, 2009
David Foster Wallace's mammoth novel Infinite Jest has been sitting on my shelf, unread, for at least 5 years. I have noted on frequent occasions that it's a book I should probably read at some point, but for various reasons, I could never find a time that felt right to read it. I'm not intimidated by its size. My favorite author is Neal Stephenson, and that guy hasn't written a novel shorter than 900 pages since the mid-90s (including the 3 part, 2700 page Baroque Cycle). To me, the problem was always that this novel seemed to be one of those post-modern exercises in literary style and cleverness, and my tolerance for such wankery had waned after reading the hugely complex and impenetrable Gravity's Rainbow (a book I like, to be sure, but that also made me want to chill out for a while). I'm generally a story-is-king kinda guy, so books that focus on exploring language and narrative style ahead of story and plot tend to grate on me unless they're really well done. It's not that such books are bad or that I can't enjoy them, it's just that I think it's a very difficult feat, and so whenever a new book of this style comes along, I have to wonder whether it's worth the trouble.
So the book has sat on my shelf, unread. In the wake of the author's untimely death last year, it seems that some fans have taken it upon themselves to encourage people to read Wallace's masterpiece. Their challenge:
Join endurance bibliophiles from around the world in reading Infinite Jest over the summer of 2009, June 21st to September 22nd. A thousand pages ÷ 92 days = 75 pages a week. No sweat.
They're calling it Infinite Summer. Despite the strange mixture of measurement units in their equation (one would think the result would be in pages per day, but whatever), 75 pages a week does indeed sound like no sweat. And as luck would have it, I ran across that site around the same time I was finishing up a book, and reading through some of the entries there finally made me interested enough to pick up the book and give it a shot.
I haven't read that much of it yet, but so far, I'm quite enjoying it. It's not nearly as pretentious as I feared, though it's obviously not beach or airport reading material either. It seems to rate somewhere between Cryptonomicon/Baroque Cycle and Gravity's Rainbow in terms of reading difficulty, though this may need some revision as I get further into the novel. When I read novels like this, there is a part of me that wants to stop every time I find something I don't know about and figure that out before continuing. I read Gravity's Rainbow in that way, and there were times when it would take me an hour to read a single page. But after reading Jason Kottke's foreword, I think I'm just going to relax this time around:
...you don’t need to be an expert in much of anything to read and enjoy this novel. It isn’t just for English majors or people who love fiction or tennis players or recovering drug addicts or those with astronomical IQs. Don’t sweat all the Hamlet stuff; you can worry about those references on the second time through if you actually like it enough to read it a second time. Leave your dictionary at home; let Wallace’s grammatical gymnastics and extensive vocabulary wash right over you; you’ll get the gist and the gist is more than enough. Is the novel postmodern or not? Who f’ing cares the story stands on its own.
And thus I've begun my Infinite Summer...
Posted by Mark on June 24, 2009 at 05:42 PM .: link :.
Sunday, June 21, 2009
Wii Game Corner (again)
Some quick reviews for Wii games I've played recently:
Posted by Mark on June 21, 2009 at 09:12 PM .: link :.
Wednesday, June 17, 2009
The Motion Control Sip Test
A few weeks ago, Microsoft and Sony unveiled rival motion control systems, presumably in response to Nintendo's dominant market position. The Wii has sold much better than both the Xbox 360 and the PS3 (to the point where sales of the Xbox and PS3 combined are around the same as the Wii), so I suppose it's only natural for the competition to adapt. To be honest, I'm not sure how wise that would be... or rather, I'm not sure Sony and Microsoft are imitating the right things. Microsoft's Project Natal seems quite ambitious in that it relies completely on gestures and voice (no controllers!). The Sony motion control system, which relies on a camera and two handheld wands, seems somewhat similar to the Wii in that there are still controllers and buttons. Incidentally, Nintendo actually released Wii Motion Plus, an improvement to their already dominant system.
My first thought on how to compete with the Wii would have been along similar lines, but not for the reasons I suspect drove Microsoft and Sony to release their solutions. The problem for MS & Sony is that the Wii is the unquestionable winner of this generation of gaming consoles, and everyone knows it. A third party video game developer can create a game for a console with an install base of 20 million (the PS3), 30 million (Xbox) or 50 million (Wii). Since the PS3 and Xbox have similar controllers, third parties can often release games on both consoles, though there is overhead in porting your code to both systems. This gives a rough parity between those two systems and the Wii... until you realize that developing games for the Xbox/PS3 means HD, and that means those games will be much more costly (in both time and money) to develop. On the other hand, you could reach the same size audience by developing a game for the Wii, using standard definition (which is much easier to develop for) and not having to worry about compatibility issues between two consoles.
The problem with Natal and Sony's Wands is that they basically represent brand new consoles. This totally negates the third party advantage of releasing a game on both platforms. Now a third party developer who wants to create a motion control game is forced to choose between two underperforming platforms and one undisputed leader in the field. How do you think that's going to go?
Microsoft's system seems to be the most interesting in that they're trying something much different than Nintendo or Sony. But "interesting" doesn't necessarily translate into successful, and from what I've read, Natal is a long way from production quality. Yeah, the marketing video they created is pretty neat, but from what I can tell, it doesn't quite work that well yet. Even MS execs are saying that what's in the video is "conceptual" and what they "hope" to have at launch. If they launch it at all. I'd be surprised if what we're seeing is ever truly launched. Yeah, the Minority Report interface (which is basically what Natal is) really looks cool, but I have my doubts about how easy it will be to actually use. Won't your arms get tired? Why use motion gestures for something that is so much easier and more precise with a mouse?
Sony's system seems to be less ambitious, but also too different from Nintendo's Wiimote. If I were at Sony, I would have tried to duplicate the Wiimote almost exactly. Why? Because then you give 3rd party developers the option of developing for Wii then porting to PS3, thus enlarging the pie from 50 million to 70 million with minimal effort. Sure the graphics wouldn't be as impressive as other PS3 efforts, but as the Wii has amply demonstrated, you don't need unbelievable graphics to be successful. The PS3 would probably need a way to upscale the SD graphics to ensure they don't look horrible, but that should be easy enough. I'm sure there would be some sort of legal issue with that idea, but I'm also sure Sony could weasel their way out of any such troubles. To be clear, this strategy wouldn't have a chance at cutting into Wii sales - it's more of a holding pattern, a way to stop the bleeding (it might help them compete with MS though). Theoretically, Sony's system isn't done yet either and could be made into something that could get Wii ports, but somehow I'm doubting that will actually be in the works.
The big problem with both Sony and Microsoft's answer to the Wiimote is that they've completely misjudged what made the Wii successful. It's not the Wiimote and motion controls, though that's part of it. It's that Nintendo courted everyone, not just video gamers. They courted grandmas and kids and "hardcore" gamers and "casual" gamers and everyone in between. They changed video games from solitary entertainment to something that is played in living rooms with families and friends. They moved into the Blue Ocean and disrupted the gaming industry. The unique control system was important, but I think that's because the control system was a signifier that the Wii was for everyone. The fact that it was simple and intuitive was more important than motion controls. The most important part of the process wasn't motion controls, but rather Wii Sports. Yes, Wii Sports uses motion controls, and it uses them exceptionally well. It's also extremely simple and easy to use, and it was targeted towards everyone. It was a lot of fun to pop in Wii Sports and play some short games with your friends or family (or coworkers or enemies or strangers off the street or whoever).
The big problem for me is that even Nintendo hasn't improved on motion controls much since then. It's been 3 years since Wii Sports, and yet it's still probably the best example of motion controls in action. I have not played any Wii Motion Plus games yet, so for me, the jury is still out on that one. However, I'm not that interested in playing the games I'm seeing for Motion Plus, let alone the prospect of paying for yet another peripheral for my Wii (though it does seem to be cheap). The other successful games for the Wii weren't successful so much for their motion controls as for other, intangible factors. Mario Kart is successful... because it's always successful (incidentally, while I still enjoy playing with friends every now and again, the motion controls have nothing to do with that - it's more just the nostalgia I have for the original Mario Kart). Wii Fit has been an amazing success story for Nintendo, but it introduced a completely new peripheral, and its success is probably more due to the fact that Nintendo was targeting more than just the core gamer audience with software that broadened what was possible on a video game console. Again, Nintendo's success is due to their strategy of creating new customers and their marketing campaigns that follow the same strategy. The Wii has a lot of games with less than imaginative motion controls - games which simply replace random button mashing with random stick waggling. But where they're most successful seems to be where they target a broader audience. They also seem to be quite adept at playing on people's nostalgia, hence I find myself playing new Mario, Zelda, and Metroid games, even when I don't like some of them (I'm looking at you, Metroid Prime 3!).
Motion controls play a part in this, but they're the least important part. Why? Because the same complaints I have for Natal and the Minority Report interface apply to the Wii (or the new PS3 system, for that matter). For example, take Metroid Prime 3. An FPS for the Wii! Watch how motion controls will revolutionize the FPS! Well, not so much. There are a lot of reasons I don't like the game, but one of them is that you constantly have to keep your Wiimote pointed up. If your hand strays or you want to rest your wrist for a moment, your POV strays too. There are probably some other ways to do an FPS on the Wii, but I'm not especially convinced (The Conduit looks promising, I guess) that a true FPS game will work that well on a Wii (heck, it doesn't work that well on a PS3 or Xbox when compared to the PC). That's probably why Rail Shooters have been much more successful on the Wii.
Part of the issue I have is that motion controls are great for short periods of time, but even when you're playing a great motion control game like Wii Sports, playing for long periods has adverse effects (Wii elbow, anyone?). Maybe that's a good thing; maybe gamers shouldn't spend so much time playing video games... but personally, I enjoy a nice marathon session every now and again.
You know what this reminds me of? New Coke. Seriously. Why did Coca-Cola change their time-honored and fabled secret formula? Because of the Pepsi Challenge. In the early 1980s, Coke was losing ground to Pepsi. Coke had long been the most popular soft drink, so they were quite concerned about their diminishing lead. Pepsi was growing closer to parity every day, and that's when they started running these commercials pitting Coke vs. Pepsi. The Pepsi Challenge took dedicated Coke drinkers and asked them to take a sip from two different glasses, one labeled Q and one labeled M. Invariably, people chose the M glass, which was revealed to contain Pepsi. Coke initially disputed the results... until they started privately running sip tests of their own. It turns out that people really did prefer Pepsi (hard as that may be for those of us who love Coke!). So Coke started tinkering with their secret formula, attempting to make it lighter and sweeter (i.e. more like Pepsi). Eventually, they got to a point where their new formulation consistently outperformed Pepsi in sip tests, and thus New Coke was born. Of course, we all know what happened. New Coke was a disaster. Coke drinkers were outraged, the company's sales plunged, and Coke was forced to bring back the original formula as "Classic Coke" just a few months later (at which point New Coke practically disappeared). What's more, Pepsi's seemingly unstoppable ascendance never materialized. For the past 20-30 years, Coke has beaten Pepsi despite sip tests which say that it should be the other way around. What was going on here? Malcolm Gladwell explains this incident and the aftermath in his book Blink:
The difficulty with interpreting the Pepsi Challenge findings begins with the fact that they were based on what the industry calls a sip test or a CLT (central location test). Tasters don’t drink the entire can. They take a sip from a cup of each of the brands being tested and then make their choice. Now suppose I were to ask you to test a soft drink a little differently. What if you were to take a case of the drink home and tell me what you think after a few weeks? Would that change your opinion? It turns out it would. Carol Dollard, who worked for Pepsi for many years in new-product development, says, “I’ve seen many times when the CLT will give you one result and the home-use test will give you the exact opposite. For example, in a CLT, consumers might taste three or four different products in a row, taking a sip or a couple sips of each. A sip is very different from sitting and drinking a whole beverage on your own. Sometimes a sip tastes good and a whole bottle doesn’t. That’s why home-use tests give you the best information. The user isn’t in an artificial setting. They are at home, sitting in front of the TV, and the way they feel in that situation is the most reflective of how they will behave when the product hits the market.”
To me, motion controls seem like a video game sip test. The analogy isn't perfect, because I think that motion controls are here to stay, but I think the idea is relevant. Coke is like Sony - they look at a successful competitor and completely misjudge what made them successful. Yes, motion controls are a part of the Wii's success, but their true success lies elsewhere. In small doses and optimized for certain games (like bowling or tennis), nothing can beat motion controls. In larger doses with other types of games, motion controls have a long way to go (and they make my arm sore). Microsoft and Sony certainly don't seem to be abandoning their standard controllers, and even the Wii has a "Classic Controller", and I think that's about right. Motion controls have secured a place in gaming going forward, but I don't see them completely displacing good old-fashioned button mashing either.
Update: Incidentally, I forgot to mention the best motion control game I've played since Wii Sports has been... Flower, for the PS3. Flower is also probably a good example of a game that makes excellent use of motion controls, but hasn't achieved anywhere near the success of Nintendo's games. It's not because it isn't a good game (it is most definitely an excellent game, and the motion controls are great), it's because it doesn't expand the audience the way Nintendo does. If Natal and Sony's new system do make it to market, and if they do manage to release good games (and those are two big "ifs"), I suspect it won't matter much...
Posted by Mark on June 17, 2009 at 06:40 PM .: link :.
Sunday, June 14, 2009
Burial Ground: The Nights of Terror
This month's pick for the Final Girl Film Club is an Italian zombie flick called Burial Ground: The Nights of Terror (aka Zombie 3). Those Italians sure do love their zombies, but I have to admit that it's a subgenre I've never really gotten into... Unfortunately, this film does little to change my mind. It's pretty much your standard zombie fare - a group of people gather at some Professor's mansion in the country (not sure how a professor could afford such a swanky place to live, but hey, it's a zombie movie, why get bogged down in details), only to find that the professor has accidentally awoken the dead, who proceed to shuffle slowly towards our heroes in the typical zombie fashion. This being a bad horror movie, many characters go wandering off on their own so that they can succumb to the undead masses. I suppose I should mention that there are some minor spoilers below, but that really doesn't matter much in a movie like this, does it?
The movie is pretty craptacular, but the filmmakers also knew where their bread was buttered and hit the zombie movie sweet spots well enough. Instead of spending what little money they had on things like actors and story, they appear to have blown everything on their special effects and makeup, to reasonably good effect. Aside from similar clothing, these zombies don't all look the same or have the same makeup - each one has a somewhat distinct look, varying in stages of decomposition. Being a zombie flick, there is no personality to any of them, only to the mob. There's some pretty effective gore here, but by the third or fourth time you see a group of zombies squishing around some unlucky character's entrails, it gets to be a bit boring. The acting is horrible, of course, and we never really get to know most of the characters, but we do get to know the female characters' bodies pretty well (not good actresses, but they look pretty good onscreen).
Again, pretty standard fare for a zombie flick. At first, I was a little confused at how this movie had achieved such a high cult-film status. Then this little fella struts onscreen:
The character of "young" Michael is the best thing in the movie, and he is definitely why this movie has attained cult status. You see, the character is supposed to be a 12 year old boy with a serious Oedipal complex. Apparently, Italian law prohibited actual children from being in schlock-fests like this, so the filmmakers had to try and find an adult who looked like a child. Somehow, they settled on 26 year old Peter Bark, who is quite small, but looks a lot older than 26 (let alone 12). Strangely, even the voice actor they got to do the dubbing on the English version sounds like a grown man imitating a child. Anyway, this character steals the show. He's actually not onscreen for a good portion of the movie, but when he is, he's awesome. And the climactic payoff of his bizarre Oedipal complex is indeed disgusting and depraved and surely the reason this film has any following at all today. I suppose a groan-inducing "I can't believe they went there" ending is better than many zombie movies manage, but still...
Aside from the unintentional comedy such a film offers, it didn't really do much for me. Zombie fanatics will surely love the experience, but I left the film with a resounding "meh." The whole Oedipal subplot certainly sets this movie apart from the shuffling mob of other zombie movies, but I don't find that particularly impressive either... Some nice gore, nudity, and unintentional comedy, but otherwise nothing special. **
Lots more screenshots and comments in the extended entry...
This is the aforementioned professor who invites the group of people to his swanky mansion just before heading over to the burial ground (how convenient that this professor is living in a mansion that is a quick walk away from some gigantic tomb). Here, he is reeling back in shock at seeing a zombie. Strangely, he also drops the little pickaxe, leaving himself defenseless (of course, he's also the one who tries to reason with the zombies, explaining that "I'm your friend!" right before he gets eaten).
Here's another of the film's zombies, showcasing the standard green-muumuu-and-neckerchief uniform of the zombies (the neckerchief was presumably used to hide the edges of the zombie masks, which I admit are pretty cool).
One would think that the pitchfork would make a reasonable weapon against the undead, but not the way this guy wields it. I don't think he even gets to use it... the first zombie that (slowly) approaches him manages to (slowly) grab the end of it and (slowly) wrest it from him. Perhaps the zombies possess super strength.
There are only ever about 5-10 zombies onscreen at any given time, but the film does a pretty good job of implying that there are tons of zombies with shots like this, where you only see a few zombies but get the impression of a giant zombie horde...
One of the other strange things about the zombies in this movie is that they have enough intelligence to use rudimentary tools and set traps for our unwitting protagonists. And they can apparently throw giant spikes with remarkable accuracy, pinning victims to a wall so that they can (slowly) use a scythe to cut off their heads.
One of the many times a character wanders off on their own so that they can become a victim. It's actually a nice shot.
This is one of my favorite Michael moments. He's exploring the basement with his mother and her friend (boyfriend? Not really sure what the deal is with Michael's father - perhaps he was conceived by midi-chlorians) when Michael finds this patch on the floor and sniffs it. Then he runs over and says "Mother... This cloth smells of death." and his mother just laughs it off, completely ignoring the creepy factor (which is only enhanced by the already creepy look of Michael). Unintentionally hilarious.
These shots are actually our first introduction to Michael. His mother opens the door to check on him and he is sound asleep. Then the camera zooms in on his disembodied head as he opens his eyes wide.
This is actually the closing screen of the film. Things are looking pretty bleak for the final surviving character when suddenly there's a freeze frame and these words appear on the screen. I have no idea what the hell this Black Spider is, or why it's making a "Profecy" or why it can't spell trivial words like "Prophecy" or "nights". Is this supposed to mean something to the audience? Or is it supposed to be lending a sorta faux-creepy credibility to the proceedings (er, preceedings?) Either way, it certainly contributes to the film's unintentional humor quotient, so I actually kinda liked it.
Well, there you have it! Near as I can tell, this isn't really at the top of the Italian Zombie sub-subgenre, but I guess it kept my interest long enough... For once, I'm actually ahead of the game and posted this several weeks early - but there will be lots more posted at Stacy's site in early July.
Posted by Mark on June 14, 2009 at 08:22 PM .: link :.
Wednesday, June 10, 2009
When I write about movies or anime, I like to include screenshots. Heck, half the fun of the Friday the 13th marathon has been the screenshots. However, I've been doing this manually and it's become somewhat time intensive... So I've been looking for ways to make the process of creating the screenshots easier. I was going to write a post about a zombie movie tonight and I had about 15 screenshots I wanted to use...
I take screenshots using PowerDVD, which produces .bmp files. To create a screenshot for a post, I will typically crop out any unsightly black borders (they're ugly and often asymmetrical), convert to .jpg and rename the file. Then I will create a smaller version (typically 320 pixels, while maintaining the aspect ratio), using a variant of the original .jpg's filename. This smaller version is what you see in my post, while the larger one is what you see when you click on the image in my post.
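As an aside, the whole routine boils down to three steps - crop, save a full-size .jpg, save a smaller copy - so in principle a short script could do it. Here's a minimal sketch using the Python Imaging Library; the folder name, crop box, and "-small" filename suffix are just examples, and I'm assuming the 320 pixels refers to the width:

# Batch-process PowerDVD .bmp screenshots: crop a fixed border, save a
# full-size .jpg, then save a 320-pixel-wide copy with "-small" appended.
import os
from PIL import Image

SOURCE_DIR = "screenshots"    # folder of .bmp captures (example path)
CROP_BOX = (0, 60, 720, 420)  # left, upper, right, lower - tune per movie
SMALL_WIDTH = 320

for name in os.listdir(SOURCE_DIR):
    if not name.lower().endswith(".bmp"):
        continue
    base = os.path.splitext(name)[0]
    img = Image.open(os.path.join(SOURCE_DIR, name)).crop(CROP_BOX)

    # Full-size .jpg for the "click to enlarge" version
    img.save(os.path.join(SOURCE_DIR, base + ".jpg"), quality=90)

    # Smaller version for the post itself, keeping the aspect ratio
    w, h = img.size
    small = img.resize((SMALL_WIDTH, int(h * SMALL_WIDTH / w)))
    small.save(os.path.join(SOURCE_DIR, base + "-small.jpg"), quality=90)

Of course, at the time I was looking for an existing tool rather than writing my own script, which is where the fun begins...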
I've always used GIMP to accomplish this, but it's a pretty manual process, so I started looking around for some batch image processing programs. There are tons of the things out there. I found several promising programs. Batch Image Resizer was pretty awesome and did exactly what I wanted, but the free trial version inserted a huge unwanted watermark that essentially rendered the output useless. I looked at a few other free apps, but they didn't meet some of my needs.
Eventually, I came across the open source Phatch, which looked like it would provide everything I needed. The only issue was the installation process. It turns out that Phatch is written in Python, so in addition to Phatch, you also need to download and install Python, wxPython, the Python Imaging Library, and the Python Win32 Extensions. What's more, the Phatch documentation doesn't take into account that new versions of all of those are available, and not all of them are compatible with each other. After a false start, I managed to download and install all the necessary stuff. Then, to run the application, I have to use the goddamned command line. Yeah, I know Windows users don't get much support from the Linux community, but this is kinda ridiculous.
But I got it all working, and then I was on my way. As I've come to expect from open source apps, Phatch has a different way of setting up your image processing than most of the other apps I'd seen... but I was able to figure it out relatively quickly. According to the Phatch documentation, the Crop action looked pretty easy to use... the only problem was that when I ran Phatch, Crop did not appear to be on the list of actions. Confused, I looked around the documentation some more, and it appeared that there were several other actions that could be used to crop images. For example, if I used the Canvas action, I could technically crop the image by specifying measurements smaller than the image itself - this is how I eventually accomplished the feat of converting several screenshots from their raw form to their edited versions. Here's an example of the zombietastic results (for reference, a .jpg of the original):
Bonus points to anyone who can name the movie!
The process has been frustrating and it took me a while to get all of this done. At this point, I have to wonder if I'd have been better off just purchasing that first app I found... and then I would have been done with it (and probably wouldn't be posting this at all). I'm hardly an expert on the subject of batch image manipulation and maybe I'm missing something fairly obvious, but I have to wonder why Phatch is so difficult to download, install, and use. I like open source applications and use several of them regularly, but sometimes they make things a lot harder than they need to be.
Update: I just found David's Batch Processor (a plugin for GIMP), but its renaming functionality is horrible (you can't actually rename the images - but you can add a prefix or suffix to the original filename.) Otherwise, it's decent.
And I also found FastStone Photo Resizer, which does everything I need it to do, and I don't need to run it from the command line either. This is what I'll probably be using in the future...
Update II: I got an email from Stani, who works on Phatch and was none too pleased about the post. It seems he had trouble posting a comment here (d'oh - second person this week who mentioned that, which is strange as it seems to have been working fine for the past few months and I haven't changed anything...). Anyway, here are his responses to the above:
As your comment system doesn't work, I post it through email. Considering the rant of your blog post, I would appreciate if you publish it as a comment for: http://kaedrin.com/weblog/archive/001652.html
And my response:
Apologies if my ranting wasn't stimulating enough, but considering that it took a couple of hours to get everything working and that I value my time, I wasn't exactly enthused with the application or the documentation. Believe it or not, I did click on the "edit" link on the wiki with the intention of adding some notes about the updated version numbers, but it said I had to be registered and I was already pretty fed up and not in the mood to sign up for anything. I admit that I neglected to do my part, but I got into this to save time and it ended up being an enormous time-sink. If I get a chance, I'll take another look.
Update III: Ben over at Midnight Tease has been having fun with Open Source as well...
Posted by Mark on June 10, 2009 at 09:54 PM .: link :.
Sunday, June 07, 2009
A Decade of Kaedrin
It's hard to believe, but it has been ten years since I started this website. The exact date is a bit hard to pinpoint, as the site was launched on my student account at Villanova, which existed and was accessible on the web as far back as 1997. However, as near as I can tell, the site now known as Kaedrin began in earnest on May 31, 1999 at approximately 8 pm. That's when I wrote and published the first entry in The Rebel Fire Alarms, an interactive story written in tandem with my regular visitors. I called these efforts Tandem Stories, and they were my primary reason for creating the website. Other content was being published as well - mostly book, movie, and music reviews - but the primary focus was the tandem stories, because I wanted to do something different on an internet that was filled with boring, uninspired, static content homepages that were almost never updated. At the time, the only form of interaction you were likely to see on a given website was a forum of some kind, so I thought the tandem stories were something of a differentiator for my site - and they were, though I never really knew how many different people visited the site. As time went on, interactivity on the web, even of the interactive story variety, became more common, so that feature became less and less unique...
I did, however, have a regular core of visitors, most of whom knew me from the now defunct 4degreez message boards (which has since morphed into 4th Kingdom, which is still a vibrant community site). To my everlasting surprise and gratitude, several of these folks are still regular visitors and while most of what I do here is for my own benefit, I have to admit that I never would have gotten this far without them. So a big thank you to those who are still with me!
But I'm getting ahead of myself here. Below is a rough timeline of my website, starting with my irrelevant student account homepage (which was basically a default page with some personal details filled in), moving on to the first incarnation of Kaedrin, and progressing through several redesigns and technologies until you get to the site you're looking at now (be forewarned, this gets to be pretty long, though it's worth noting that the site looked pretty much like it does today way back in 2001, so the bulk of redesigning happened in the 1999-2001 timeframe)...
Posted by Mark on June 07, 2009 at 09:38 AM .: link :.
Wednesday, June 03, 2009
Fallout 3 Thoughts
I've spent the past month or so playing through Fallout 3. I realize I'm a little late to the party, but here are some thoughts:
Posted by Mark on June 03, 2009 at 07:54 PM .: link :.
Copyright © 1999 - 2012 by Mark Ciocco.