In 1224, in his poem Battle of the Wines, Henry d’Andeli tells the story of a famous wine tasting organised by the French king Philip Augustus. Samples from across Europe were tasted and judged by an English priest, who classified the wines as either ‘celebrated’, for those which pleased him, or ‘excommunicated’, for those that did not. Today, the wine industry is a $354.7 billion global market, yet irrespective of its size, it contends with an unenviable challenge: wine quality cannot be ascertained ex ante. The industry therefore faces an information asymmetry problem. The producer, distributor, or retailer involved in each economic transaction commonly possesses greater material knowledge than the general consumer. Where this is the case, systems emerge which attempt to address the imbalance. Filmmakers spend millions creating trailers; in literature, there are renowned awards such as the Booker Prize; and in the wine industry, there have been scores and competitions. From well-established international awards to small, emerging regional competitions, the format and scale are broad and diverse. However, in a marketplace where applications like Vivino provide consumers with immediate community-generated reviews, it is not altogether clear whether competitions are effective tools for establishing objective, qualitative benchmarks to aid purchasing decisions, or simply revenue-generating exercises. In this article, I dissect the research on wine competitions and discuss what producers should consider before entering their wines.
The reasons for entering a wine competition are as broad and diverse as the competitions themselves. While medals were traditionally hoped to serve as qualitative benchmarks for consumers and producers, emerging competitions, particularly regional awards, are focussing more intensely on the marketing opportunities their events afford. I am aware of at least one UK-based competition which asks prospective judges to declare their social media followings in their initial application. It is commonplace for competitions to tout the benefits of entering: the ability to augment price, qualitative benchmarking against peers, increased recognition, better-informed consumer decision making, and marketing exposure. Whilst data can be found to support some of these claims, it is misleading to suggest that all competitions are equal in what they can offer, or that the data is static, such that what was true at one point in time remains true thereafter. What a producer can expect to gain from entering a competition depends very much on the nature of the market, the way the wines are judged and medals awarded, and the size and prestige of the competition.
The reliability of award judging
In emerging markets and for less-established wineries, benchmarking against one’s peers is an important task. In order to establish a valuable, objective, and qualitative benchmark, judging must be consistent, free of bias, and take place within a rigid framework. If this is not the case, the very notion of a competition serving as a benchmark is little more than lip service. But how consistent or reliable is wine judging, and to what extent does it serve as a reliable benchmark?
Since Orley Ashenfelter published ‘Tales from the Crypt’ in 2006, in which auctioneer Bruce Kaiser tells of the trials and tribulations of being a wine judge, the consistency of wine judging has been in question. In 2008, Robert Hodgson explored judge reliability at the California State Fair Wine Competition. In his study, spanning 2005 to 2008, panels of four expert judges each received a flight of 30 wines into which triplicate samples poured from the same bottle had been inserted. The results showed that only 10% of the judges were able to replicate their scores within the same medal group, and another 10% scored the same wine as both Bronze and Gold. By subjecting the panel results to ANOVA, Hodgson found that for 50% of the wines analysed, the variation in evaluation was determined exclusively by the quality of the wine. For the other half, however, biases in the judges’ evaluations influenced the scores the wines received.
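The logic of Hodgson’s ANOVA test can be illustrated with a toy example: if replicate pours of the same wine vary as much as scores across different wines, there is no quality signal to detect. The scores below are invented for illustration, not Hodgson’s data:

```python
# One-way ANOVA by hand: does between-wine variation exceed within-wine
# (replicate) variation? All scores here are hypothetical.
scores = {  # three blind pours of each wine, scored by the same judge
    "wine A": [84, 90, 81],
    "wine B": [88, 87, 89],
    "wine C": [92, 80, 86],
}

groups = list(scores.values())
n = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n

# Between-group sum of squares: how far each wine's mean sits from the overall mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: how much replicate pours of the same wine disagree.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = n - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F = {F:.2f}")  # a small F: replicate noise swamps any real quality difference
```

With these numbers the F statistic comes out well below 1, i.e. the judge’s replicate scatter is larger than the differences between wines, which is precisely the failure mode Hodgson reported for half the wines he analysed.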

The following year, Hodgson explored concordance amongst 13 U.S. wine competitions. An analysis of over 4,000 entries showed little concordance in the awarding of gold medals. Of the 2,440 wines entered in more than three competitions, 47% received gold medals, but 84% of these same wines received no award at all in another competition. An analysis of the number of gold medals received across multiple competitions indicated that the probability of winning gold at one competition is stochastically independent of the probability of winning gold at another, suggesting that winning a gold medal is greatly influenced by chance alone. In 2011, Michael Patrick Allen and John Germov reviewed the scores received by more than 5,000 wines entered in four capital city wine competitions in 2007. Consistent with Hodgson’s findings, there was only a moderate degree of agreement between judges in the medals awarded to wines entered into multiple competitions.
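To see why stochastic independence points to chance, consider a small simulation. The numbers of competitions and the gold-medal rate below are assumptions for illustration, not Hodgson’s actual figures:

```python
import random
from math import comb

random.seed(42)

N_WINES = 2440   # wines entered in several competitions (figure from the text)
N_COMPS = 5      # hypothetical number of competitions each wine enters
P_GOLD = 0.09    # assumed per-competition gold rate; illustrative only

# Pure-chance model: each competition independently awards gold with
# probability P_GOLD, regardless of the wine's quality.
gold_counts = [sum(random.random() < P_GOLD for _ in range(N_COMPS))
               for _ in range(N_WINES)]

# Under independence, gold counts should follow a Binomial(N_COMPS, P_GOLD).
observed = [gold_counts.count(k) / N_WINES for k in range(N_COMPS + 1)]
expected = [comb(N_COMPS, k) * P_GOLD**k * (1 - P_GOLD)**(N_COMPS - k)
            for k in range(N_COMPS + 1)]

for k in range(N_COMPS + 1):
    print(f"{k} golds: observed {observed[k]:.3f}, binomial {expected[k]:.3f}")
```

Hodgson’s point is the converse: the real medal counts he analysed matched this kind of binomial pattern closely, which is exactly what one would expect if medals were handed out at random rather than tracking quality.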
In 2018, Emmanuel Paroissien and Michael Visser found that only a minority of contests award medals that are significantly correlated with quality, primarily those founded long ago and whose judges are required to evaluate relatively few wines per day. Furthermore, in 2009, Robert Hodgson expanded his research to explore exactly how ‘expert’ expert wine judges really were. Using Cohen’s kappa, a statistic which measures the agreement between two raters corrected for chance, Hodgson quantified judge consistency. Taking a value of 0.7 for Cohen’s weighted kappa as the threshold for expertise, fewer than 30% of the judges who participated in either of Hodgson’s two studies would be considered ‘expert’.
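Cohen’s weighted kappa is straightforward to compute. The sketch below scores a hypothetical judge’s two blind passes over the same ten wines; the ratings are invented for illustration and are not from Hodgson’s studies:

```python
from collections import Counter

def weighted_kappa(r1, r2, categories):
    """Cohen's weighted kappa (linear weights) for two ratings of the same items.

    1.0 means perfect agreement; 0.0 means agreement no better than chance.
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint counts and each pass's marginal counts.
    obs = Counter((index[a], index[b]) for a, b in zip(r1, r2))
    m1 = Counter(index[a] for a in r1)
    m2 = Counter(index[b] for b in r2)
    # Weighted observed disagreement vs. disagreement expected by chance.
    num = sum(abs(i - j) / (k - 1) * obs[(i, j)] / n
              for i in range(k) for j in range(k))
    den = sum(abs(i - j) / (k - 1) * (m1[i] / n) * (m2[j] / n)
              for i in range(k) for j in range(k))
    return 1 - num / den

# Hypothetical example: the same ten wines, tasted blind twice by one judge.
medals = ["none", "bronze", "silver", "gold"]
pass1 = ["gold", "silver", "none", "bronze", "silver",
         "gold", "none", "bronze", "silver", "none"]
pass2 = ["silver", "silver", "none", "silver", "bronze",
         "gold", "bronze", "bronze", "gold", "none"]
print(f"weighted kappa = {weighted_kappa(pass1, pass2, medals):.2f}")  # prints 0.58
```

A judge who drifts one medal band on half the wines, as in this example, lands well below Hodgson’s 0.7 bar, which is what makes his “fewer than 30% qualify” finding so damning.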
The available literature on the reliability of judging is worrying, to say the least, raising serious questions about whether the medals awarded are an adequate indicator of quality at all. Notwithstanding poor competition methodology, humans are also subject to a number of well-documented and well-studied biases, even when judging alone. For example, how we judge a performance is affected by the most recent score we have given; our brains use immediate events as points of reference. If the previous item receives a high score, our evaluation of the current item improves; if the previous item is scored poorly, the evaluation of the current item decreases. This is the well-documented assimilation effect. Conversely, under the contrast effect, an item is given a low score when the previous item received a high one, and vice versa. Finally, there is sequence bias: people are affected by where an item appears in a sequence, and it is well established that the first and last items are remembered best and judged most positively.

Regardless of how ‘expert’ you believe your panel to be, its members are, like all humans, subject to significant bias, so rigorous competition methodology is paramount to reliability. Research indicates that even where the most ‘expert’ judges are chosen, wines are often awarded medals by chance, not quality. Whilst this raises concerns, it does not discredit all wine competitions. It does, however, provide key pointers as to how competitions should be run and what producers should look for before entering their wines.
If I were a producer submitting wine for the purpose of benchmarking, I would avoid any competition where scoring is determined by a number of different panels sitting together discussing sets of wines, as opposed to a single panel tasting all wines a number of times. Scores given in that setting are subject not only to individual bias but are almost inevitably influenced by the group. Even where samples are then reviewed by a second panel, and though competition hosts may believe this adds rigour, there is no evidence that it does: the second panellists are subject to exactly the same biases as those before them.
A competition where each wine is judged independently, double-blind, multiple times, by the same set of judges, perhaps using rank-order scoring to account for generous or lenient scorers, where the number of awards is limited and results are only accepted within an agreed kappa score, would go some way toward establishing a ‘gold standard’ of judging. It may also be helpful for competitions to analyse and publish their results each year using an appropriate, rigorous statistical method to recognise and address bias. Publishing vague statistics about the returns available to producers is somewhat misleading without making available the evidence supporting a competition’s reliability and freedom from bias.
Wine awards and price augmentation
Whether or not an accurate qualitative judgement is made, it is more than likely that among producers’ hopes is that a medal brings the ability to augment price. While winning a medal can indeed allow producers to charge more, it is by no means a guarantee, and it depends on a number of factors. One UK competition claims that 70% of consumers are willing to spend more on an award-winning wine. This may well have been true at some point in time, in a particular market, category, or competition, but it is misleading insofar as it implies that winning a medal in any competition confers the ability to increase pricing.
Emmanuel Paroissien and Michael Visser of the American Association of Wine Economists have published perhaps the most rigorous analysis of the causal impact of medals on producer prices. Focussing on the wines of Bordeaux, the pair collected data from eleven wine competitions. Their estimator indicates that a producer whose wine received a medal in one of these competitions can augment his or her price by 13%, with the impact of gold being much larger than that of silver and bronze. When they allowed the medal effect to differ across competitions, they found a statistically significant effect for only a small group of contests: the most prestigious, longest-established competitions, whose judges are required to evaluate relatively few wines per day and who grant medals by oral consensus, with the Decanter World Wine Awards performing particularly well.

The contests which underperformed in terms of a medal’s effect on pricing had juries either entirely made up of amateurs or a mix of amateurs and professionals. They charge the lowest entry fees and sticker prices, attract the fewest participants, and are among the most recently founded. Additionally, it is worth approaching with some scepticism ‘studies’ which claim to observe astronomical uplifts in sales (some by up to 700%), particularly those organised by the competitions themselves. These studies are rarely comprehensive, nor are they of the same calibre as those referenced here.
My interpretation of price augmentation as it relates to award labels is that while it is indeed possible to increase prices following the awarding of a label, the extent depends almost entirely on the label awarded, how established the competition is, how rigorous its controls are, and how it is judged. An emerging, relatively unknown competition is unlikely to influence the price a producer can charge either distributors or consumers.
Consumer perception of award labels
Whether medal-winning labels direct consumers and whether they aid them are two entirely different questions. One could argue that while the presence of a label may direct the consumer toward a wine simply by being there, it does not follow that the consumer will be pleased with their purchase, nor that they are being directed toward a quality wine. This is particularly pertinent given that some UK competitions award a medal to more than half of the wines entered (over 80% in some cases), and that the rigour of judging is far from consistent. So, what does the public think about award labels?
A study published in the Journal of Retailing and Consumer Services in 2017 sought to better explain the extent to which wine awards inform purchasing decisions. Focus groups were conducted with 44 participants across four sessions (50% women). Participants completed questionnaires to determine their involvement and familiarity with wine; within each focus group there was a mix of self-reported experts, high-involvement consumers, medium-involvement consumers, low-involvement consumers, and wine beginners. Three themes emerged: scepticism about the sheer volume of wine awards, concern over their confusing and sometimes misleading nature, and concern over a lack of transparency. Participants mentioned that they often see more wine bottles with award stickers than without; some thought the stickers were present just to make the bottles “look prettier”.

The researchers suggested that wineries may want to consider entering only more prestigious wine competitions or only put awards from those competitions that are more well-known on the bottle. Additionally, if the wine won anything less than gold, the winery might want to consider leaving this information off the bottle, as it may have a negative influence on the purchase behaviour of consumers.
In 2002, Orth and Krska interviewed 69 respondents in local Czech wine shops. Their answers to Likert-scale questions indicated that medals do not appear to be very important to Czechs, yet their answers to a choice experiment showed they gave equal importance to price and to the presence of awards on labels. Similarly, in 2006, Lockshin et al. published a study using a sample of 250 Australian regular wine drinkers. The results showed that low-involvement wine consumers tended to react more positively to gold medals on wines sold at lower price points, though the effect decreased as prices rose. Highly involved consumers relied less on gold medals, but their purchase decisions were still somewhat influenced by them at lower prices.
To date, the most comprehensive study of the importance of award labels was published in 2009 by Dr Steve Goodman in the International Journal of Wine Business Research. Goodman’s research was conducted with 15 other researchers from 13 countries, assessing the elements driving consumers’ choices in a retail environment. The study, based on more than 2,500 consumers, found that medals and awards ranked, on average, only eighth out of 13 elements driving consumers’ choices.
Whether award labels serve as useful quality signals or obfuscate decision making is unclear. The world has changed: consumers are savvier and have more information readily available, mobile applications provide immediate community-generated reviews, and the sheer number of awards means that labels simply do not stand out the way they once did. Producers should carefully consider, before applying, whether a prospective competition adds significant value to their brand. For producers whose wines are not sold in supermarkets, will a medal label really bolster sales? The answer is not entirely clear.
Marketing machines or consumer aids?
With the advance of the internet, wine competitions boast an increasing online presence. More than ever, producers are competing for space online, where creating great content can profoundly impact the bottom line. Emerging regional competitions are evidently acutely aware of this battle, given the manner in which they emphasise their PR and marketing efforts as reasons to enter. But is the competition really offering you something you can’t achieve yourself? Relatively unknown emerging competitions are unlikely to be known to anybody outside the industry, or to carry brand power in the minds of consumers. Producers ought to ask themselves whether a prospective competition has anything significant to offer which they cannot achieve more reputably themselves, particularly given the aforementioned inconsistency of judging and the ambiguous nature of price augmentation.
I am aware of at least one UK competition which asks prospective judges to share the number of social media followers they have, and another which selects judges on their ability to market the event itself through their own social media platforms. Whilst it could be argued that the associated marketing could be useful for augmenting price and increasing awareness (this may well be the case for large shows like Decanter), where marketing takes precedence over consistent and rigorous judging, the value the competition adds for both consumers and producers is in doubt. Where emerging competitions hand out medals to over 80% of entrants, I am inclined to believe they are incentivising entries in order to increase revenue; their primary focus may be to generate revenue and sell sub-standard marketing to entrants, as opposed to providing a reliable, qualitative consumer aid. Ultimately, emerging competitions have no incentive to judge rigorously and hand out fewer medals; they lack the prestige to take that approach.
I am rather sceptical about the consistency of judging at wine competitions and about whether their ever-expanding categories are reliable indicators of quality or merely fruitless, antiquated endeavours. I am fairly confident that in years to come, all but the most reputable competitions will fade into obscurity, with the public favouring sources of information they deem more reliable and nonpartisan. I would urge producers to ask themselves what they hope to gain from a prospective competition and to what extent it is able to influence individuals outside the wine industry itself. If you desire an objective benchmark of quality, choose carefully. If you are looking for an affordable marketing option, ask yourself who your wines will be marketed to. If you hope to inform consumers, ask yourself just how helpful award labels are when some competitions hand them out to 87% of entrants.
Wine scores are training wheels for wine novitiates.