Sunday, May 24, 2009

I didn't think I could possibly hate the 100-point system any more....

First of all, it's not a 100-point system.

Here's the breakdown of what the points mean from Allen Meadows, who, of all the writers using the 100-point system, is one whose opinion I respect far more than most....

95-100: Truly incomparable and emotionally thrilling. By definition, it is the reference standard for its appellation. Less than 1% of fine wine (e.g. $30+ per bottle).

90-94: Outstanding. Worth a special effort to purchase and cellar, and will provide memorable drinking experiences.

85-89: Very good to high quality. Wines that offer high quality, some flair and generally very good typicity. “Good Value” wines will often fall into this category. Worth your attention.

80-84: Average to good quality. Fine wine, but solid rather than exciting.

70-79: Good wines, acceptable. But personally I find life too short to waste on boring wines.

60-69: Not faulty, but plain, with low quality fruit (e.g. dilute) or crass winemaking (e.g. dolled up with oak chips). Without faults, yet with no redeeming features.
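Just to show how little granularity there actually is, here's a quick back-of-the-napkin sketch in Python. It's purely my own illustration; the cutoffs and band names are just my paraphrase of Meadows' breakdown above, collapsed into the handful of bands the "100-point" scale really uses:

# A sketch only: Meadows' scale collapsed into the bands it actually uses.
# The cutoffs and labels are paraphrased from the breakdown above, not
# anything Meadows himself publishes in this form.
BANDS = [
    (95, 100, "truly incomparable"),
    (90, 94, "outstanding"),
    (85, 89, "very good to high quality"),
    (80, 84, "average to good"),
    (70, 79, "good, acceptable"),
    (60, 69, "not faulty, but plain"),
]

def band(score):
    """Return the descriptive band a score falls into, if any."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    return "unused -- the bottom 60 points of the 'hundred' never appear"

print(band(86))  # very good to high quality
print(band(88))  # very good to high quality -- same band, same meaning
print(band(42))  # unused -- the bottom 60 points of the 'hundred' never appear

Six buckets, and more than half the scale never gets touched.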

Now, how exactly do we even consider this a 100-point system when only 40 of the points on the scale are used? And Meadows will actually score good wines under 90 points, a constant reminder to his readers that IT DOESN'T HAVE TO GET OVER 90 POINTS TO BE A GOOD WINE. I'm sorry if I seem a little aggravated; it's just that I've recently come across 2 websites dedicated exclusively to wines that have scored over 90 points. I'm not going to link to them here, because I couldn't bear the thought that I might actually direct traffic to either of their sites. They're both built entirely around the premise that nothing that gets under 90 points is worth drinking. This is, in my humble opinion, and if you will please forgive my language, a massive pile of rancid horseshit.

As is evidenced by Meadows' point breakdown above, it's more of a 6-point system than a 100-point system, since the subtlety of difference between, say, an 86 and an 88-point rating is basically subjective and meaningless. Here's why: Meadows rates the 2005 Joseph Drouhin Chambolle Musigny 89 points, and the 2005 Joseph Drouhin Gevrey Chambertin 88 points. By his own definition, that means they are both "Very good to high quality." But which one's better? Is he saying the Chambolle is a better Chambolle than the Gevrey is a Gevrey? Or just that the Chambolle is a better Burgundy overall? How accurately can you quantify that kind of distinction, and how reliable can it possibly be? How many factors of circumstance, environment, and taster fatigue could have come into play the day Meadows tasted these 2 wines? Can you really make an objective distinction between an 86-point and an 88-point wine? And more importantly, do you need to? Can't we just agree that they're both very good wines?

So far we're only talking about the differences between two similar scores from one writer; we haven't even touched on what it means when a wine gets scored by multiple publications. Check this shit out... The 2005 K Vintners The Boy got 92 points from Jay Miller of the Wine Advocate, and...wait for it....74 points from Harvey Steinman of the Wine Spectator. Can you say "What the fuck, Batman?" Ok, to be fair, maybe Harvey got a bad bottle or something, so how about another example? The 2005 Penfolds Koonunga Hill gets 91 points from Robert Parker, and 85 points from Harvey Steinman. That means, according to their respective publications' scoring definitions, while for Harvey the wine just barely qualified as "Very good: a wine with special qualities," for Parker it is "An outstanding wine of exceptional complexity and character." And while Harvey feels the wine should be consumed immediately, Parker contends that it can be cellared and consumed until 2018.

This discrepancy in opinion reflects another major flaw in the 100-point system: why even bother all using the same rating system if there's no consistency? Presumably all of these writers intend for their scores to be used as a resource for the consumer, to help people make decisions on what wine to buy, but is it really any help whatsoever when one publication gives 91 points and another 85? Wouldn't it be just as easy for consumers to understand if Parker just called it 'outstanding' and Harvey just called it 'very good'?

And last but not least, there's the 100-point wine. This has got to be one of the most obnoxious parts of the 100-point system. Are you really telling me there's such a thing as a perfect wine? And more importantly, are you really saying there's a difference between a 99-point wine and a 100-point wine? Aren't 'not recommended', 'mediocre', 'good', 'very good', 'outstanding', and 'classic' enough rating options?

Seriously.
