Is there a “Winestream Media” Bias?

[Photo of wine critics. Credit: wine-searcher.com © Bob McClenahan/Stephen Tanzer/Nathaniel Welch; W. Blake Gray]

On October 24th, the guys at Wine Curmudgeon released a study on whether American wine magazines are biased in favor of red wine. The anecdotal notion that red wines receive higher scores than white wines in these publications has been around for years, but this study digs much deeper into the data than anything I’ve seen. Their conclusion is that the “winestream media” (great line) does indeed have a red wine bias because it gives far more 90+ point scores to red wines than to whites. Unfortunately, though, the study’s methodology and data collection are not adequate to provide either (1) instructive data or (2) reliable analysis. As the introduction states, the data was provided on the condition of anonymity, which means we know nothing about how it was collected, and further that it “was not originally collected with any goal of being a representative sample.” The 14,885 white wine scores and 46,924 red wine scores therefore lack context and relational relevance, which hollows out any explanatory power the study could have had. A statistician would say the sample isn’t representative, so no reliable inference about wine publications in general can be drawn from it. The study is, however, quite interesting for the questions it raises, and I thank Wine Curmudgeon for that.
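
To see why representativeness, not sample size, is the real issue, here is a minimal sketch of the comparison the study is implicitly making: a standard two-proportion z-test on the share of 90+ scores. Only the two totals come from the study; the 90+ counts below are hypothetical, invented purely for illustration.

```python
import math

# Totals reported by the Wine Curmudgeon study
reds_total, whites_total = 46924, 14885

# HYPOTHETICAL 90+ counts, for illustration only; the study's
# actual breakdown isn't reproduced in this post
reds_90plus, whites_90plus = 30000, 7500

p_red = reds_90plus / reds_total        # share of reds scoring 90+
p_white = whites_90plus / whites_total  # share of whites scoring 90+

# Two-proportion z-test using the pooled proportion
p_pool = (reds_90plus + whites_90plus) / (reds_total + whites_total)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / reds_total + 1 / whites_total))
z = (p_red - p_white) / se

print(f"red 90+ share {p_red:.2f}, white 90+ share {p_white:.2f}, z = {z:.1f}")
```

With samples this large, almost any gap produces an enormous z-statistic, so “significance” is trivially achieved. The binding question is whether the sample reflects the scores publications actually hand out, and with anonymous, non-representative data we have no way to know.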

The central observation of the study, that more of the 90+ point wines are red than white, seems obvious to anyone who follows wine scores. This could be, as the study wonders, because we only know the scores that get reported, and publications are more likely to publish scores above 90. Further, “winemakers are likely to promote scores above 90.” This rationale seems likely, though it doesn’t tell us whether, or why, there is a red/white bias behind the scores. The study also wonders whether this means red wines “are inherently better than white wines.” That is the question that got me thinking, though not in the direction it would likely send most people.

I’ve noticed that reds tend to score better than whites, too, but then I’ve also generally scored reds higher than whites myself. Or so I thought until I looked at the wines I’ve reviewed on Cellartracker: 121 reds at an average score of 90.9 and 45 whites at an average score of 90.3. So, um? I’ve noticed that other wine drinkers, from the casual drinker to the expert, tend to show preferences for red wine as well, though there are exceptions. The best chardonnays from Burgundy and California (and increasingly Oregon), sauvignon blancs and sémillons (and their blends) from Bordeaux, chenin blancs from parts of the Loire, and rieslings from Germany not only receive scores often well above 90, but frequently outscore many of the reds produced in the same regions; that is to say, within certain regions the best whites and reds both score well into the 90s. And because each region is often covered by the same critic, this observation suggests that something other than skin color plays a role in scoring.
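
Out of curiosity, I ran a back-of-the-envelope check on whether my own 0.6-point gap means anything. Cellartracker’s summary doesn’t report the spread of my scores, so the standard deviation below is an assumption; the counts and averages are the ones above.

```python
import math

# My Cellartracker review counts and average scores (from above)
n_red, mean_red = 121, 90.9
n_white, mean_white = 45, 90.3

# ASSUMED standard deviation of individual scores; Cellartracker's
# summary stats don't report this
sd_red = sd_white = 2.5

# Welch's t-statistic for the difference in mean scores
se = math.sqrt(sd_red**2 / n_red + sd_white**2 / n_white)
t = (mean_red - mean_white) / se
print(f"gap = {mean_red - mean_white:.1f} points, t = {t:.2f}")
```

With a plausible spread of a couple of points per score, a 0.6-point gap across 121 reds and 45 whites sits comfortably inside the noise (t well below 2), which matches my “so, um?” reaction: my own red preference may be smaller than I assumed.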

This presents another question: can a critic who has no particular liking for a grape or blend nonetheless give it a high score based on factors like quality and complexity? I don’t believe they can. I’m just not a riesling person, no matter how hard I try. I’ve had well-aged, super expensive riesling and I’ve had $18 bottles that I’m told are awesome values, and I can hardly tell the difference between the two. I like to think I have a good palate and am able to detect intricate nuances, but my taste buds don’t read riesling’s notes well enough to discern between a “drink now” bottle and a cellar selection. And I imagine this is a very sad thing, because riesling is supposed to be a wine collector’s mecca.

Another question, though a bit off topic: should the price-to-quality ratio, or “value,” be a variable in a wine’s score? I’ll use old school Rioja as an example. If you read my post on my most memorable reds, you’ll notice that it includes a ~$40 leathery Lopez de Heredia that I really enjoyed and scored 92 points, the lowest score of any wine in that post and the same score I gave a 2014 Barkan Classic Pinot Noir from Israel on Cellartracker, a wine that sells in the US for $8.99 and requires no aging. I gave the Barkan an extra point on Cellartracker because of its supreme value, which means that had I posted it using my Good Vitis system it would have scored a 91 and been given an “A” value rating (my twin scoring method isn’t captured by Cellartracker’s analytics, so I gave it a 92 there). I would prefer that value be kept out of the numerical score and captured separately by another rating. For the record, I would have given the Heredia Tondonia a B value rating at $40.
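
To make the “keep value out of the score” idea concrete, here is a toy sketch of a twin rating: the numeric score stays a pure quality measure, and value becomes a separate letter grade derived from price. The dollars-per-point thresholds are invented for illustration (calibrated only so the two examples above land on the grades I mention); this is not the actual Good Vitis formula.

```python
# Toy twin-rating system: quality score and value grade are reported
# separately instead of folding value into the number.

def value_grade(price_usd: float, score: int) -> str:
    # Crude "quality above baseline": points earned beyond 85
    points_above_85 = max(score - 85, 1)
    dollars_per_point = price_usd / points_above_85
    # Invented cutoffs, for illustration only
    if dollars_per_point < 2:
        return "A"
    if dollars_per_point < 8:
        return "B"
    return "C"

# The two wines from the post, scored on quality alone
print(91, value_grade(8.99, 91))   # Barkan Classic Pinot Noir -> A
print(92, value_grade(40.00, 92))  # Lopez de Heredia Tondonia -> B
```

The design point is simply that the two numbers answer different questions: the score says how good the wine is, and the grade says whether it is worth the money at its price.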

The final question I’ll pose is: do we need to relate one wine to the body of wine we’ve had in order to pass judgment? I’m not sure what the answer should be. If a wine can be judged in a bubble, solely on its merits, then we’re getting pretty solid insight into how that wine performs in its own right. If we didn’t do that, it would be akin to saying we don’t like burritos because we don’t like Chipotle, which is logically weak because it strips away all context from the relationship between the body and one of its parts. My reference-point wine critic is Stephen Tanzer because my tastes seem pretty similar to his: when he scores a wine, I’m likely to more or less agree with that score. This is different from someone like Robert Parker, whose lower-scored wines tend to be more to my liking than his higher-scored ones. Knowing how my tastes line up with the critics’ helps me decide whether I want to purchase a particular wine. The key to understanding how I align with these reviewers is their consistency and their ability to tie their scores to common wine characteristics, which can only be done if we relate one wine to others. This jury of one is still undecided on this question.

The subject of wine reviewing and scoring is a contentious one on which the wine academy will never find consensus, but as you can see, that doesn’t discourage us wine lovers from considering the viewpoints. The debate rages on…