The 90-Point Rut

Looking back at the wines I’ve had over the last several months, I’ve clearly been in a 90-point rut: prior to this past weekend, I had given either 90 or 91 points to 10 of my last 16 wines. Of the remaining six, three scored lower than 90 and three higher than 91. Some of those 90- and 91-point wines came from wineries I greatly respect: Waters and Baer in Washington State, Cameron and Bergstrom in Oregon, Melville in California. A Gigondas from Kermit Lynch’s Domaine les Pallieres was supremely disappointing. This past weekend, though, I devoured standout wines from Washington’s Reynvaan and Australia’s Torbreck, and they pulled me solidly out of the 90-point rut. But it has me thinking: was the rut in the glass or in my head?

My gut tells me that when I’m on the fence about a wine, I default to 90 points. If it’s good but too expensive, do I take the easy way out with a 90? If it’s solid but unremarkable, do I go straight to 90? One way or another, I seem to rationalize my way to 90 whenever a wine satisfies but doesn’t excite. It’s my comfort zone. It’s aesthetically pleasing. A 90 is also safe in a crowd: experienced winos can disagree with it (maybe they’d go 89 or 91), but they tend to respect it either way, mostly because they just don’t get excited about it. 90 points can deflect attention, and sometimes that’s what we want.

It’s difficult to find out how many wines receive these scores from amateur or even professional reviewers. Two of the largest retail wine inventories online, K&L and wine.com, let you see all their wines with 90+ points, but can’t show you only the 93-point wines, for example. Cellartracker.com doesn’t allow you to search by score, either. So I went back to the report written by winecurmudgeon.com on the “winestream media bias” towards giving red wines higher scores than whites to see if they broke down their sample of over 50,000 professional reviews, and the answer is: kind of. Still, it’s illuminating. The higher the score, the lower the quantity:

[Chart: number of reviews at each score level, from the Wine Curmudgeon data]

So what does a 90-point review really mean? What should the consumer take from it? After all, wine reviews are primarily for the consumer. These are complicated questions, but I think there are some simple ways to evaluate the bridgmanite of wine scores, the most abundant stuff in the wine aisle.

First, a score of 90 is low enough, and abundant enough, that within the context of wines reviewed by a particular source it’s unlikely to mark a wine of distinction. If you’re looking for a uniquely expressive wine, you probably shouldn’t spend your dollars on a bottle just because it received 90 points.

Second, place the wine in the context of its category. Napa cabernet sauvignons aren’t cheap. There are some over-achieving bottles that start around $25, but most of the good stuff starts around the $50 price point, which is also roughly where you find bottles whose profile consistency transcends vintage variation. So if the 90-point bottle in question is a $50 Napa cab, it’s probably a well-executed version of the prototypical Napa cab, lacking the particularities that would make it unique. Another good example is sauvignon blanc from New Zealand. These bottles routinely start around $10 and few go above $25, and it’s hard to find a version at any price that reaches the mid-90s on any major reviewer’s 100-point scale. If you find a 90-point version for $12, you’ve probably found an over-achiever; if you’re looking at a 90-point $20 bottle, you’ve probably got an under-achiever.

You can also do this evaluation based on the grapes involved; take a Bordeaux(-style) blend, for example. There are fantastic Bordeaux blends for $20-25, from Bordeaux itself as well as from other parts of the world. If you see a wine of this ilk for $55 with a 90-point score, you should probably do a bit more research before deciding.

Third, like all scores, a 90-point score means very little in the end. It comes down to what you like and what intrigues you.

Finally, refer back to the second paragraph of this post. 90 points is a safe place for a reviewer to land if, for whatever reason, they’re unsure about or unmoved by the wine but recognize that it meets the broad concept of “quality wine.” It’s my belief that wines that achieve more than the sum of their parts, wines whose profiles transcend the varietal or blend, earn the right to be considered exceptional. I can say with a high degree of confidence that no 90-point wine meets either of those conditions, and I say that from a good amount of experience both drinking wines and reading wine reviews. I cannot recall seeing adjectives like “special” or “brilliant” used in 90-point reviews. At the end of the day, unless the wine is of exceeding value, I can take or leave 90-point wines, though I’m still not sure whether the rut was in my head or in my glass.

Is There a “Winestream Media” Bias?

Credit: wine-searcher.com / © Bob McClenahan/Stephen Tanzer/Nathaniel Welch; W. Blake Gray

On October 24th, the guys at Wine Curmudgeon released a study on whether American wine magazines are biased in favor of red wine. The anecdotal notion that red wines receive higher scores than white wines in these publications has been noticed for years, but this study does a much deeper dive into the data than anything I’ve seen. Their conclusion is that the “winestream media” (great line) does indeed have a red wine bias because it gives far more 90+ point scores to red wines than to whites. Unfortunately, though, the study’s methodology and data collection are not adequate to provide either (1) instructive data or (2) reliable analysis. As the introduction says, the data was provided on the condition of anonymity, which means we know nothing about how it was collected, and further that it “was not originally collected with any goal of being a representative sample.” The 14,885 white wine scores and 46,924 red wine scores therefore lack context and relational relevance, which hollows out whatever explanatory power the study could have had; a statistician would call it a convenience sample that can’t support general conclusions. The study is, however, quite interesting for the questions it raises, and I thank Wine Curmudgeon for that.

The central observation of the study, that more of the 90+ point wines are red than white, seems obvious to anyone who follows wine scores. This could be, as the study wonders, because we only know the scores that get reported, and publications are more likely to publish scores above 90. Further, “winemakers are likely to promote scores above 90.” That rationale seems plausible, though it doesn’t tell us whether, or why, there is a red/white bias behind the scores. The study also wonders whether this means red wines “are inherently better than white wines.” This is the question that got me thinking, though not in the direction it would likely send most people.

I’ve noticed that reds tend to score better than whites, too, but then I’ve also generally scored reds higher than whites myself. Or so I thought until I looked at the wines I’ve reviewed on Cellartracker: 121 reds at an average score of 90.9 and 45 whites at an average score of 90.3. So, um? I’ve noticed that other wine drinkers, from the casual drinker to the expert, tend to show a preference for red wine as well, though there are exceptions. The best chardonnays from Burgundy and California (and increasingly Oregon), sauvignon blancs and semillons (and their blends) from Bordeaux, chenin blancs from parts of the Loire, and rieslings from Germany not only receive scores that are often well above 90, but come from regions where many of the reds score well below what the best whites achieve; that is to say, within certain regions the best whites and the best reds both score well into the 90s. And because each region is often covered by the same critic, this observation would seem to suggest that something other than skin color plays a role in scoring.

This presents another question: can a critic who has no particular liking for a grape or blend still give it a high score based on factors like quality and complexity? I don’t believe they can. I’m just not a riesling person, no matter how hard I try. I’ve had well-aged, super expensive riesling and I’ve had $18 bottles that I’m told are awesome values, and I can hardly tell the difference between the two. I like to think I have a good palate and am able to detect intricate nuances, but my taste buds don’t pick up riesling’s notes well enough to discern a “drink now” bottle from a cellar selection. And I imagine this is a very sad thing, because riesling is supposed to be a wine collector’s mecca.

Another question, though a bit off topic: should the price-to-quality ratio, or “value,” be a variable in a wine’s score? I’ll use old-school Rioja as an example. If you read my post on my most memorable reds, you’ll notice it includes a ~$40 leathery Lopez de Heredia that I really enjoyed and scored 92 points, the lowest score of any wine in that post and the same score I gave on Cellartracker to a 2014 Barkan Classic Pinot Noir from Israel that sells in the US for $8.99 and requires no aging. I gave the Barkan an extra point on Cellartracker because of its supreme value; had I posted it using my Good Vitis system, it would have scored a 91 and been given an “A” value rating (my twin scoring method isn’t captured by Cellartracker’s analytics, so I gave it a 92 there). I would prefer that value be kept out of the numerical score and captured separately by another rating. For the record, I would have given the Heredia Tondonia a B value rating at $40.

The final question I’ll pose: do we need to relate one wine to the body of wine we’ve had in order to pass judgment? I’m not sure what the answer should be. If a wine can be judged in a bubble, solely on its merits, then we get pretty solid insight into how that wine performs in its own right. If we didn’t do that, it would be akin to saying we don’t like burritos because we don’t like Chipotle, which is logically weak because it strips away all context from the relationship between the whole and one of its parts. My reference-point wine critic is Stephen Tanzer because my tastes seem pretty similar to his: when he scores a wine, I’m likely to more or less agree with that score. This is different from someone like Robert Parker, whose lower-scored wines tend to be more to my liking than his higher-scored ones. Knowing how my tastes line up with the critics’ is helpful in deciding whether I want to purchase a particular wine. The key to understanding how I align with these reviewers is their consistency and their ability to tie their scores to common wine characteristics, which can only be done if we relate one wine to others. This jury of one is still undecided.

The subject of wine reviewing and scoring is a contentious one on which the wine academy will never find consensus, but as you can see, that doesn’t discourage us wine lovers from considering the viewpoints. The debate rages on…