The Wine Rules – The Degree of Difficulty
by Dudley Brown
In my pre-teen years it seemed that if it wasn’t about the fastest, highest, strongest, or furthest, it just wasn’t sport. I think it was Olga Korbut who first caused me to doubt this and perhaps Dorothy Hamill who crushed the thought forever.
Simply put, there are sports and events that require judging to establish the “best.” The most common method they have for doing this involves requiring a compulsory set of things the athlete must do – the minimum standard for that level of competition – as well as a scoring system for the stuff that exceeds that standard.
In diving, there is a calculus for each dive called the “degree of difficulty” whereby a very good but very difficult dive can trump an excellent but technically easier dive. The converse is also true, which makes an athlete gamble with self-knowledge rather than with the judges. Moreover, there are statistical “fail safes” in place: typically five to seven judges score each dive, and the highest and lowest scores are thrown out, and that sort of thing.
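The “throw out the highest and lowest scores” procedure is essentially a trimmed mean. A minimal sketch in Python (the function name and the panel of scores are illustrative, not from any actual event):

```python
def trimmed_score(scores):
    """Combine a panel's scores after discarding the single highest
    and single lowest mark, as many judged sports do."""
    if len(scores) < 3:
        raise ValueError("need at least three judges to trim")
    trimmed = sorted(scores)[1:-1]  # drop lowest and highest
    return sum(trimmed) / len(trimmed)

# Five judges score a dive out of 10; the 7.0 and 9.5 are discarded
panel = [8.5, 9.0, 8.0, 9.5, 7.0]
print(trimmed_score(panel))  # averages 8.0, 8.5, 9.0 -> 8.5
```

The point of the trim is robustness: one outlier judge, whether biased or simply having a bad day, cannot move the result.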
It’s a process that has been debated, refined, studied and statistically analyzed for favoritism and accuracy over thousands of events over many decades. No one seriously objects to this state of affairs. It doesn’t make everyone happy every day but it makes sense and largely works.
Now imagine a slightly different state of affairs where a few judges passed in scores on each contestant based on just three physical qualities or properties. And, that there was no weight given to the difficulty of the performance. What about an event where a single judge just writes a few sentences describing the dive or performance and hands out a score out of 10?
To me, the former scenario sounds more like Australian Idol or MasterChef, and the latter more like a film review rather than a properly judged event. Dare we suggest that the former is also a bit like a wine show and the latter a bit like a wine critic?
My point is not to cane wine shows or critics. There are very good reasons for both to exist, continue and improve. There are also very good reasons to scrutinize both – and for both to welcome the same scrutiny they offer to wine – without fear or favor in either direction.
Without debating the merits of each (for the record – between the two I tend to prefer better critics mostly because they advance a wider discussion about particular wines of merit than shows), I think we can all accept that they are often mutually reinforcing institutions for both good and bad principally due to the overlap of those performing the judging.
Operating from the following premises, we can conclude certain things:
Premise #1 – Prices and qualities of wine vary widely, and there is a vast number of brands, varieties, etc. In short, there is too much information for consumers to process objectively in order to definitively avoid bad or potentially embarrassing purchases. Conversely, they are not always able to make good choices with assurance.
Premise #2 – Most wine buyers like to find high quality wines at various price points, and wine show wins and high scores provide some assurance of doing so. As a result, wine show trophies and high scores confer a significant premium (in assurance or price) on producers.
Premise #3 – Judges and critics can only be expected to judge entrants or submissions, not non-entrants or non-submitters. Many wineries choose not to submit or enter. As a result, incomplete data sets and imperfect benchmarks are the norm for judges and critics.
Premise #4 – The seeming or actual randomness of results inherent in shows and reviews produces unintended behavior among entrants.
Premise #5 – Most wine critics are (or were) paid by media companies. Most wine judges are not paid, despite often being the same people.
Conclusion #1 – The potential reward for a lower-quality or relatively inexpensive wine getting lucky at a wine show far exceeds the entry fee and/or the risk of not getting lucky, because the producer already knows (or should know) the unhappy truth about the wine’s merits.
Conclusion #2 – The risk of economic or brand damage to a very good or very expensive wine that gets poor marks at a wine show far exceeds the entry fee.
Conclusion #3 – Because the number of undistinguished entrants naturally far exceeds the distinguished, and wine shows charge fairly hefty entry fees, most wine shows are unlikely to go out of business unless a significant portion of entrants comes to believe that shows offer them no potential credibility. In short, the wine show game will keep spinning largely without respect to its rigor, credibility or transparency.
Conclusion #4 – Professional wine journalists and critics, on the other hand, have a problem. Media companies are shedding every cost they can at the same time that free wine blogs erode the market for paid wine journalism. In the parlance, critics have a “business model problem” if they lack a subscription revenue stream to underpin their independence.
Conclusion #5 – While wine shows are big (and still growing) money makers for many organisers, most are steadily losing the credibility sweepstakes for reasons such as published research suggesting they are little more than guessing games.
The capital city wine shows are big money spinners, typically owned by the royal societies or fairgrounds. Regional shows are similarly structured, but they tend to plow the money back into regional wine promotion, education and the like. How the substantial profits of the capital city shows are used to benefit the wine industry is less clear. This near-total lack of financial transparency does little to improve the optics.
The solution of employing newly starving critics as professional wine show judges has merit, but there are cultural issues to manage, such as the “getting eagles to fly in formation” problem. Traditional critics are not necessarily attuned to consensus decisions or acceding to senior judges. The bigger problem currently seems to be that paying judges could raise show costs, and thereby fees, which could decrease revenues through lower participation.
So how else could we resolve the tensions in these unintended consequences? On one hand, the wine shows could find a way to make shows more appealing to their current high-end non-entrants to raise revenues. On the other hand, a too-rigorous show might discourage entrants who are only “hoping to get lucky.” Given trade-offs like this, it is understandable how shows got into the pickle they are in.
My modest proposal is that if wine shows wish to be taken as seriously as it seems they wish to be as well as maintaining the serious cash streams that pour through them, they should incorporate systems similar to those employed in athletics to provide additional rigor and logic to what is ultimately a subjective and contextual process.
In tennis – even at the major championship level – the umpires and line judges were amateurs until perhaps thirty years ago. The head umpire McEnroe was screaming at during the US Open was likely a stockbroker or doctor he knew from the Forest Hills tennis club.
Once tennis professionalized officiating, quality improved dramatically. Once officials started being judged against videotape of matches, quality improved again. With the Hawk-Eye system, it has improved still more. Is more rigor what wine evaluation needs? Although the binary, in-or-out nature of line calls is not akin to the multifactor, analog nature of wine evaluation, the professional standards tennis officials are held to offer much to learn from. Moreover, the streams of data that emerge have created instant feedback loops that change how the game is played in real time.
From the “scored” sports, what if we borrowed something similar to the “degree of difficulty” concept? For example, wines from difficult vintages / regions that stand out could be recognized as such. Similarly, single variety and / or single vineyard wines are much less forgiving to make than blends of either and should be acknowledged for same. Ditto for wines certified organic or biodynamic.
Another proposal would be to include a few controlled groups of samples in each show, with all results sent to one statistician or economist for analysis. Enough data from enough shows would give a statistician or wine economist a rich area of study over long periods of time, and could be relatively inexpensive as a result. In fact, it could be an entire career in a far more interesting field than the ones many statisticians toil in.
That some judges view the multi-year statistical work done at wine shows in California as a “gotcha” bit of tricky business, rather than an embedded step in a long-term quality improvement process, seems out of step with embracing the bits that science does well. Were a tested, data-driven approach systematically embraced, the effects could be profoundly positive for the industry as a whole.
For instance, judges could have a career “report card” that assesses their scoring both in relation to the other judges at each show and for consistency against the control group wines, both on the day and over time. The data would accumulate over time, offering insights into judges’ capabilities and professional development. It could throw out the high and low scores to get better long-term readings, etc.
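A report card of this kind could start with just two numbers per judge: their average gap from the panel consensus, and their spread on repeated sittings of the same control wine. A sketch with invented data (the metric names and scores are illustrative):

```python
from statistics import mean, pstdev

def report_card(judge_scores, panel_means, control_scores):
    """Two illustrative metrics for one judge:
    - bias: average gap between the judge's mark and the panel
      mean across the wines they judged together;
    - control_spread: standard deviation of the judge's repeated
      scores of the same control wine (lower = more consistent)."""
    bias = mean(j - p for j, p in zip(judge_scores, panel_means))
    control_spread = pstdev(control_scores)
    return {"bias": round(bias, 2), "control_spread": round(control_spread, 2)}

# One judge across four wines, plus three blind sittings of one control wine
print(report_card([92, 88, 95, 85], [90, 89, 93, 86], [91, 90, 92]))
# -> {'bias': 0.5, 'control_spread': 0.82}
```

Tracked across shows and seasons, these numbers would give exactly the annual rating the sports-official analogy suggests: a persistent positive bias or a wide control spread becomes visible long before any single show could reveal it.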
This approach would allow individual judges’ variances to be assessed at multiple shows with an annual assessment or rating as with sports officials. It would provide a platform for assuring entrants of the relative merits of each show, individual judges’ development, a rated field to work with in assembling the best judges for shows, the basis for a pricing system for shows and judges, etc. While it could be as revolutionary for wine shows as “Moneyball” was for baseball, who is prepared to be the wine industry’s Billy Beane? (If it is any encouragement to the timid, Brad Pitt played Beane in the movie…)
With transparent and higher quality assessment comes the opportunity for value creation at shows currently unimagined. For instance, specialization around quality, region, variety etc. could all be big (think global) revenue opportunities for judges and shows that create more value for brands and consumers with higher levels of assurance and interest.
My question is whether judges are willing to accept the level of assessment and scrutiny they regularly apply to wine, in order to reach a brave new world where the show system can afford for them to be professionals. To do so, they would have to let the wine (and the data) rule.
My opinion is that we are currently only scratching the surface of industry value creation and are letting tradition function as an imperfect proxy for excellence. Media, critics and wine shows still have a lot of competing and collaborating to do to transcend the current approaches and create the value and interest that the consumer craves.
Let the innovation begin.
What do you think?