Best–worst scaling


Best–worst scaling (BWS) techniques involve choice modelling and were invented by Jordan Louviere in 1987 while he was on the faculty at the University of Alberta. In general, with BWS, survey respondents are shown a subset of items from a master list and are asked to indicate the best and worst items. The task is repeated a number of times, varying the particular subset of items in a systematic way, typically according to a statistical design. Analysis is typically conducted, as with discrete choice experiments (DCEs) more generally, under the assumption that respondents make choices according to a random utility model (RUM). RUMs assume that an estimate of how much a respondent prefers item A over item B is provided by how often item A is chosen over item B in repeated choices. Thus, choice frequencies estimate the utilities on the relevant latent scale. BWS essentially aims to provide more choice information at the lower end of this scale without having to ask additional questions that are specific to lower-ranked items.
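
To make the last point concrete, the following minimal sketch (with made-up numbers, not data from any BWS study) shows how, under a binary logit random utility model, an observed choice frequency maps onto a utility difference on the latent scale:

    import math

    # Sketch only: under a binary logit RUM, the share of times A is chosen
    # over B estimates the latent utility difference u_A - u_B.
    p_A_over_B = 0.73  # hypothetical observed frequency of choosing A over B
    utility_gap = math.log(p_A_over_B / (1 - p_A_over_B))
    print(round(utility_gap, 2))  # about 0.99 on the logit scale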

History

Louviere attributes the idea to early work by Anthony A. J. Marley in his PhD thesis; Marley, together with Duncan Luce in the 1960s, produced much of the ground-breaking research in mathematical psychology and psychophysics that axiomatised utility theory. Marley had encountered problems axiomatising certain types of ranking data and speculated in the discussion of his thesis that examination of the "inferior" and "superior" items in a list might be a fruitful topic for future research. The idea then languished for three decades until the first working papers and publications appeared in the early 1990s. The definitive textbook describing the theory, methods and applications was published in September 2015 by Jordan Louviere, Terry N. Flynn and Anthony A. J. Marley. The book brings together the disparate research from various academic and practical disciplines, in the hope that duplication of effort and mistakes in implementation are avoided. The three authors have already published many of the key academic peer-reviewed articles describing BWS theory, practice, and a number of applications in health, social care, marketing, transport, voting, and environmental economics. However, the method has now become popular in the wider research and practitioner communities, with other researchers exploring its use in areas as diverse as student evaluation of teaching, marketing of wine, quantification of concerns over ADHD medication, the importance of environmental sustainability, and priority-setting in genetic testing.

Purposes

There are two different purposes of BWS – as a method of data collection, and/or as a theory of how people make choices when confronted with three or more items. This distinction is crucial, given the continuing misuse of the term maxdiff to describe the method. As Marley and Louviere note, maxdiff is a long-established academic mathematical theory with very specific assumptions about how people make choices: it assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance.
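
As an illustration of that assumption (a minimal sketch: the utilities and the helper function are hypothetical, not taken from the literature), a logit form of the maxdiff model assigns each possible best–worst pair a probability proportional to the exponentiated utility difference between the two items:

    import math
    from itertools import permutations

    def maxdiff_probability(utilities, best, worst):
        # Probability of picking `best` as best and `worst` as worst from the
        # displayed set under a logit form of the maxdiff model: every ordered
        # pair of distinct items competes, and the chosen pair is the one with
        # the largest utility difference plus random error.
        numerator = math.exp(utilities[best] - utilities[worst])
        denominator = sum(
            math.exp(utilities[k] - utilities[l])
            for k, l in permutations(utilities, 2)
        )
        return numerator / denominator

    # Hypothetical latent utilities for four displayed items.
    utilities = {"A": 1.2, "B": 0.4, "C": 0.0, "D": -0.8}
    print(round(maxdiff_probability(utilities, best="A", worst="D"), 3))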

As a theory of process (theory of decision-making)

Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us about five of the six possible implied paired comparisons: A > B, A > C, A > D, B > D, and C > D.
The only paired comparison that cannot be inferred is B vs. C. In a choice among five items, maxdiff questioning informs on seven of the ten implied paired comparisons. Thus BWS may be thought of as a variation of the method of paired comparisons.
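
The implied comparisons can also be enumerated mechanically, as in the following minimal sketch (the item labels and helper function are illustrative only):

    from itertools import combinations

    def implied_pairs(items, best, worst):
        # One best-worst response implies that `best` beats every other item
        # and that every other item beats `worst`; pairs among the remaining
        # items stay unresolved.
        pairs = {(best, other) for other in items if other != best}
        pairs |= {(other, worst) for other in items if other not in (best, worst)}
        return pairs

    items = ["A", "B", "C", "D"]
    resolved = implied_pairs(items, best="A", worst="D")
    print(sorted(resolved))                                   # the 5 resolved pairs
    print(len(list(combinations(items, 2))) - len(resolved))  # 1 unresolved: B vs. C
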
Yet respondents can produce best–worst data in any of a number of ways. Instead of evaluating all possible pairs, they might choose the best from n items, the worst from the remaining n-1, or vice versa. Or indeed they may use another method entirely. Thus it should be clear that maxdiff is a subset of BWS. The maxdiff model has proved useful in proving the properties of a number of estimators in BWS. However, its realism as a description of how humans actually provide best and worst data can be questioned for the following reason. As the number of items increases, the number of possible pairs grows much faster: n items produce n(n-1)/2 pairs, so four items imply 6 pairs but ten items imply 45. To assume that respondents evaluate all possible pairs is a strong assumption, and in 14 years of presentations the three co-authors have virtually never found a course or conference participant who admitted to using this method to decide their best and worst choices. Virtually all admitted to using sequential models.
Early work did use the term maxdiff to refer to BWS, but with the recruitment of Marley to the team developing the method, the correct academic terminology has been disseminated throughout Europe and Asia-Pacific. Indeed, it is an open question whether the major software manufacturers offering discrete choice maxdiff routines actually implement maxdiff models in estimating parameters, despite their continued advertising of maxdiff capabilities.

As a method of data collection

The second use of BWS is as a method of data collection. BWS can, particularly in the age of web-based surveys, be used to collect data in a systematic way that forces all respondents to provide best and worst data in the same way, and it enables collection of a full ranking if repeated BWS questioning is implemented to collect the "inner rankings". In many contexts, BWS for data collection has been regarded merely as a way to obtain such data in order to facilitate data expansion or to estimate conventional rank-ordered logit models.

Types ("cases")

The renaming of the method, to make clear that maxdiff scaling is BWS but BWS is not necessarily maxdiff, was decided by Louviere in consultation with his two key contributors in preparation for the book, and was presented in an article by Flynn. That paper also took the opportunity to make clear that there are, in fact, three types of BWS: Case 1, Case 2 and Case 3. These three cases differ largely in the complexity of the choice items on offer.

Case 1 (the "object case")

Case 1 presents items that may be attitudinal statements, policy goals, marketing slogans or any other type of item that has no attribute-and-level structure. It is primarily used to avoid the scale biases known to affect rating-scale data. It is particularly useful when eliciting the degree of importance of, or agreement with, a set of statements, and when the researcher wishes to ensure that the items compete with each other.

Case 2 (the "profile case")

Case 2 has predominated in health, where the items are the attribute levels describing a single profile of the type familiar to choice modellers. Instead of making choices between profiles, the respondent makes best and worst choices within a profile. Thus, for the example of a mobile phone, the choices would be the most acceptable and least acceptable features of a given phone. Case 2 has proved to be powerful in eliciting preferences among vulnerable groups, such as the elderly, older carers, and children, who find conventional multi-profile discrete choice experiments difficult. Indeed, the first comparison of Case 2 with a DCE in a single model found that whilst the vast majority of respondents provided usable data in the BWS task, only around one half did so in the DCE.

Case 3 (the "multi-profile case")

Case 3 is perhaps the most familiar to choice modellers, being merely an extension of a discrete choice model: the number of profiles must be three or more, and instead of simply choosing the profile he would purchase, the respondent chooses the best and worst profiles.

Designs for studies

Case 1 BWS studies typically use balanced incomplete block designs (BIBDs). These ensure that every item appears the same number of times and that every item competes with every other item the same number of times. These features are attractive because they prevent the respondent from inferring erroneous information about the items, and they ensure that there can be no "ties" in importance or salience at the very top or bottom of the scale.
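
For instance, the balance properties can be checked directly, as in this minimal sketch (the seven-item design shown is a standard textbook BIBD, not one taken from a published BWS study):

    from itertools import combinations
    from collections import Counter

    # A balanced incomplete block design for 7 items in 7 choice sets of size 4:
    # every item appears 4 times and every pair of items co-occurs exactly twice.
    blocks = [
        {4, 5, 6, 7}, {2, 3, 6, 7}, {2, 3, 4, 5}, {1, 3, 5, 7},
        {1, 3, 4, 6}, {1, 2, 5, 6}, {1, 2, 4, 7},
    ]

    appearances = Counter(item for block in blocks for item in block)
    co_occurrences = Counter(
        pair for block in blocks for pair in combinations(sorted(block), 2)
    )

    print(set(appearances.values()))     # {4}: each item appears 4 times
    print(set(co_occurrences.values()))  # {2}: each pair competes exactly twice
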
Case 2 BWS studies can use orthogonal main effects plans (OMEPs) or efficient designs, although the former have predominated to date.
Case 3 BWS studies may use any of the types of design typically used for a DCE, with the proviso that the number of profiles in a choice set must be three or more for the BWS task to make sense.

Recent history

Steve Cohen introduced BWS to the marketing research world in a paper presented at an ESOMAR conference in Barcelona in 2002 entitled "Renewing market segmentation: Some new tools to correct old problems". The paper was nominated for Best Paper at that conference. In 2003, at the ESOMAR Latin America Conference in Punta del Este, Uruguay, Cohen and his co-author, Dr. Leopoldo Neira, compared BWS results to those obtained by rating-scale methods. This paper won Best Methodological Paper at that conference. Later the same year it was selected as winner of the John and Mary Goodyear Award for Best Paper across all ESOMAR conferences in 2003, and it was then published as the lead article in "Excellence in International Research 2004", published by ESOMAR. At the 2003 Sawtooth Software Conference, Cohen's paper "Maximum Difference Scaling: Improved Measures of Importance and Preference for Segmentation" was selected as Best Presentation. Cohen and Sawtooth Software president Bryan Orme agreed that MaxDiff should be part of the Sawtooth package, and it was introduced later that year. Later, in 2004, Cohen and Orme won the David K. Hardin Award from the American Marketing Association for their paper published in Marketing Research Magazine entitled "What's your preference? Asking survey respondents about their preferences creates new scaling decisions".
In parallel to this, Emma McIntosh and Jordan Louviere introduced BWS to the health community at the 2002 Health Economists' Study Group conference. This prompted the collaboration with Flynn and ultimately the link-up with Marley, who had independently begun working with Louviere to prove the properties of BWS estimators. The popularity of the three cases has largely varied by academic discipline, with Case 1 proving popular in marketing and food research, Case 2 largely being adopted in health, and Case 3 being used across a variety of disciplines that already use DCEs. It was partly the lack of understanding in many disciplines that there are three distinct cases of BWS that prompted the three main developers to write the textbook.
The book contains an introductory chapter summarising the history of BWS and the three cases, together with a discussion of why the researcher must consider whether he wishes to use BWS to understand the theory of decision-making and/or merely to collect data in a systematic way. Three chapters, one for each case, follow, detailing the intuition behind and the application of each. A chapter bringing together Marley's work proving the properties of the key estimators and laying out some open issues for further analysis then follows, after which come nine further chapters.

Conducting a study

The basic steps in conducting all types of BWS study are:
Estimation of the utility function is performed using any of a variety of methods:
  1. Multinomial discrete choice analysis, in particular multinomial logit. The multinomial logit model is often the first stage in analysis and provides a measure of average utility for the attribute levels or objects.
  2. In many cases, particularly Cases 1 and 2, simple observation and plotting of choice frequencies should actually be the first step, as it is very useful in identifying preference heterogeneity and respondents who use decision rules based on a single attribute (see the sketch after this list).
  3. Several algorithms can be used in the estimation process, including maximum likelihood, neural networks, and hierarchical Bayes. Hierarchical Bayes models are beneficial because they allow borrowing of information across the data, although since BWS often allows the estimation of individual-level models, the benefits of Bayesian models are heavily attenuated. Response-time models have recently been shown to replicate the utility estimates of BWS, which represents a major step forward in the validation of stated preferences generally, and BWS preferences specifically.
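
The sketch below (with entirely hypothetical responses) illustrates the count-based first look described in point 2: a simple best-minus-worst score per item, optionally normalised by how often the item was shown.

    from collections import Counter

    # Hypothetical responses: (items shown, item chosen best, item chosen worst).
    responses = [
        (("A", "B", "C", "D"), "A", "D"),
        (("A", "B", "C", "E"), "B", "E"),
        (("B", "C", "D", "E"), "B", "D"),
    ]

    best = Counter(b for _, b, _ in responses)
    worst = Counter(w for _, _, w in responses)
    shown = Counter(item for items, _, _ in responses for item in items)

    # Best-minus-worst count scores give a quick view of the preference ordering
    # (and, when computed per respondent, of heterogeneity) before model fitting.
    for item in sorted(shown):
        score = best[item] - worst[item]
        print(item, score, round(score / shown[item], 2))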

Advantages

BWS questionnaires are relatively easy for most respondents to understand. Furthermore, humans are much better at judging items at extremes than at discriminating among items of middling importance or preference. And since the responses involve choices of items rather than expressions of strength of preference, there is no opportunity for scale-use bias.
Traditional rating scales are very easy for respondents, but they tend to deliver results indicating that everything is "quite important", making the data not especially actionable. BWS, on the other hand, forces respondents to make choices between options while still delivering rankings that show the relative importance of the items being rated. It also produces:

Disadvantages

Best–worst scaling involves the collection of at least two sets of data: at a minimum, first-best and first-worst, and in some cases additional ranks. The issue of how to combine these data is pertinent. Early work assumed that best was simply the inverse of worst: that respondents had an internal ranking of all items and simply chose the highest- and lowest-ranked items in a given question. More recent work has suggested that in some contexts this is not the case: a person might choose according to traditional economic theory for best but choose worst using an elimination-by-aspects strategy. In the presence of such different decision rules it becomes impossible to know how to combine the data: at what point does the person, when moving down the rankings, switch from "economic trading" to "elimination by aspects"?
This presents a clear problem for the data-augmentation motivation for BWS, but not necessarily for BWS used as a way to understand decision processes. Psychologists in particular would be interested in the different types of decision-making. Marketers, too, might wish to know whether a given product had an unacceptable feature. Work is ongoing to investigate when different decision rules arise, and whether and how data from such different sources may be combined.
BWS also suffers from the same disadvantages as all stated preference techniques: it is unknown whether the stated preferences are consistent with choices made in the real world. In some instances revealed preferences are available, providing a test of the BWS choices. In others, quite often in health, there are no revealed preference data and validation appears impossible. More recently, attempts have been made to validate stated preference (SP) data using physiological data, such as eye-tracking and response times. Early work suggests that response-time models are consistent with results from BWS models in health care, but more research is required in other contexts.