I actually had this at a beer club outing last year, and I loved it so much that I went out and bought a bottle. It was a big, rich imperial stout mixed with typical pumpkin pie flavors, but very well balanced. Or was it? This was a beer that really shined in the beer club setting, where I was only trying a few ounces, if that. And as pumpkin beers go, this was the first time I'd had a pumpkin stout, a combination of flavors that was surprisingly good. But maybe I've fallen prey to a classic market research problem. Fair warning, serious nerdery ahoy. Feel free to skip to the review below.
Remember the embarrassment that was New Coke? Longtime readers know I'm a huge fan of Coke and I really freakin hate Pepsi. Why did Coke reformulate their time-honored, classic formula? Well, Coke had been losing ground to Pepsi, and then this classic ad campaign came out: The Pepsi Challenge. Basically, Pepsi went out and asked a bunch of loyal Coke drinkers to take a sip from two glasses and pick which one was better. The participants preferred Pepsi by a rather large margin. Coke disputed the results until they started running their own internal sip tests... and got pretty much the same results. So they started fiddling with their fabled formula, making it sweeter and lighter (i.e. more like Pepsi). Eventually, they settled on a formula that consistently outperformed Pepsi in the challenge, and thus New Coke was born.
Of course, we all know what happened. New Coke was a disaster. Coke drinkers were outraged, the company's sales plunged, and Coke was forced to bring back the original formula as "Classic Coke" just a few months later (at which point New Coke practically disappeared). What's more, Pepsi's seemingly unstoppable ascendance never materialized. When it comes to the base cola brand, people still prefer Coke to Pepsi, sip tests be damned! So what's going on here? Why do people buy Coke when sip tests show that they like Pepsi better? Malcolm Gladwell wrote about why in his book Blink:
The difficulty with interpreting the Pepsi Challenge findings begins with the fact that they were based on what the industry calls a sip test or a CLT (central location test). Tasters don't drink the entire can. They take a sip from a cup of each of the brands being tested and then make their choice. Now suppose I were to ask you to test a soft drink a little differently. What if you were to take a case of the drink home and tell me what you think after a few weeks? Would that change your opinion? It turns out it would. Carol Dollard, who worked for Pepsi for many years in new-product development, says, "I've seen many times when the CLT will give you one result and the home-use test will give you the exact opposite. For example, in a CLT, consumers might taste three or four different products in a row, taking a sip or a couple sips of each. A sip is very different from sitting and drinking a whole beverage on your own. Sometimes a sip tastes good and a whole bottle doesn't. That's why home-use tests give you the best information. The user isn't in an artificial setting. They are at home, sitting in front of the TV, and the way they feel in that situation is the most reflective of how they will behave when the product hits the market."
Dollard says, for instance, that one of the biases in a sip test is toward sweetness: "If you only test in a sip test, consumers will like the sweeter product. But when they have to drink a whole bottle or can, that sweetness can get really overpowering or cloying." Pepsi is sweeter than Coke, so right away it had a big advantage in a sip test. Pepsi is also characterized by a citrusy flavor burst, unlike the more raisiny-vanilla taste of Coke. But that burst tends to dissipate over the course of an entire can, and that is another reason Coke suffered by comparison. Pepsi, in short, is a drink built to shine in a sip test. Does this mean that the Pepsi Challenge was a fraud? Not at all. It just means that we have two different reactions to colas. We have one reaction after taking a sip, and we have another reaction after drinking a whole can.
The parallel here is obvious. Drinking a small dose of a beer in the context of beer club (where I'm sampling a whole bunch of beers) can lead to some distortion in ratings. I usually mention this bias in my beer club posts, but despite my usual snark when bringing it up, I do think those ratings are a bit suspect.
In this particular case, the beer did not fare quite as well upon revisiting it in a more controlled environment (sheesh, I'm a nerd), though I suppose the fact that I aged this beer a year or so also has something to do with it. Yet more distortion! I suspect a fresh bottle would have more of that rich, chewy stout character and a more biting spice presence, whereas this aged bottle showed a lot more pumpkin and less in the way of stoutness. Spicing was clearly still strong, but not quite as bright as it seemed last year. Ok fine, I admit it, all of my tasting notes are unreliable. I hope you're happy. Anywho, here are my notes:
Cape Ann Fisherman's Imperial Pumpkin Stout (2011 Vintage) - Pours a black color with minimal, rather light-colored head. Smells very sweet, with a huge pumpkin pie component, both pumpkin and spice asserting themselves. Taste is again very sweet, with some caramel flavors, a little in the way of chocolate, and even a little roastiness, but those pumpkin pie notes are here too, and they get stronger the more I drink. This seems to have lost some of its punch from last year, though it's still big and flavorful stuff. Mouthfeel is rich and creamy, a little spicy, but this doesn't drink like an 11% beer. Definitely not as great as I remembered, but still very solid, and as pumpkin beers go, this is still one of my favorites. B+
Beer Nerd Details: 11% ABV bottled (22 oz. bomber). Drank out of a snifter on 9/29/12. 2011 vintage.
Yeah, so maybe I'll try to find a fresh bottle of this stuff and see if it fares any better. I've actually never had any of Cape Ann's other Fisherman's beers, so I should probably get on that too...