Choice modelling


Choice modelling attempts to model the decision process of an individual or segment via revealed or stated preferences made in a particular context or contexts. Typically, it uses discrete choices to infer the positions of the items on some relevant latent scale. Many alternative models exist in econometrics, marketing, sociometrics and other fields, including utility maximisation, optimisation applied to consumer theory, and a plethora of other identification strategies which may be more or less accurate depending on the data, sample, hypothesis and the particular decision being modelled. In addition, choice modelling is regarded as the most suitable method for estimating consumers' willingness to pay for quality improvements in multiple dimensions.

Related terms

There are a number of terms which are considered synonymous with choice modelling. Some are accurate and some are used in industry applications, although considered inaccurate in academia.
These include the following:
  1. Stated preference discrete choice modelling
  2. Discrete choice
  3. Choice experiment
  4. Stated preference studies
  5. Conjoint analysis
  6. Controlled experiments
Although disagreements in terminology persist, it is notable that the academic journal intended to provide a cross-disciplinary source of new theoretical and empirical research into the field is called the Journal of Choice Modelling.

Theoretical background

The theory behind choice modelling was developed independently by economists and mathematical psychologists. Its origins can be traced to Thurstone's research into food preferences in the 1920s and to random utility theory. Random utility theory was then developed by Daniel McFadden in economics and primarily by Duncan Luce and Anthony Marley in mathematical psychology. In essence, choice modelling assumes that the utility an individual derives from item A over item B is a function of the frequency with which he or she chooses item A over item B in repeated choices. Because of his use of the normal distribution, Thurstone was unable to generalise this binary choice into a multinomial choice framework, which is why the method languished for over 30 years. From the 1960s through the 1980s, however, the method was axiomatised and applied in a variety of types of study.

Distinction between revealed and stated preference studies

Choice modelling is used in both revealed preference (RP) and stated preference (SP) studies. RP studies use the choices already made by individuals to estimate the value they ascribe to items – they "reveal their preferences – and hence values – by their choices". SP studies use the choices made by individuals under experimental conditions to estimate these values – they "state their preferences via their choices". McFadden successfully used revealed preferences to predict the demand for the Bay Area Rapid Transit system before it was built. Luce and Marley had previously axiomatised random utility theory but had not used it in a real-world application, although they had spent many years testing the method in SP studies involving psychology students.

History

McFadden's work earned him the Nobel Memorial Prize in Economic Sciences in 2000. By that time, however, much of the work in choice modelling had been proceeding for almost 20 years in the field of stated preferences. Such work arose in various disciplines, originally transport and marketing, from the need to predict demand for new products that were potentially expensive to produce. This work drew heavily on the fields of conjoint analysis and design of experiments, in order to:
  1. Present to consumers goods or services that were defined by particular features that had levels, e.g. "price" with levels "$10, $20, $30"; "follow-up service" with levels "no warranty, 10 year warranty";
  2. Present configurations of these goods that minimised the number of choices needed in order to estimate the consumer's utility function.
Specifically, the aim was to present the minimum number of pairs, triples, etc. of (for example) mobile phones so that the analyst could estimate the value the consumer derived from every possible feature of a phone. In contrast to much of the work in conjoint analysis, discrete choices were made, rather than ratings on category rating scales. David Hensher and Jordan Louviere are widely credited with the first stated preference choice models. Together with others, including Joffre Swait and Moshe Ben-Akiva, they remained pivotal figures over the next three decades, and – along with many other figures working predominantly in transport economics and marketing – helped develop, disseminate and apply the methods widely.

Relationship with conjoint analysis

Choice modelling from the outset suffered from a lack of standardisation of terminology, and all the terms given above have been used to describe it. However, the largest disagreement has proved to be geographical: in the Americas, following industry practice there, the term "choice-based conjoint analysis" has come to dominate. This reflected a desire that choice modelling retain the attribute-and-level structure inherited from conjoint analysis while using discrete choices, rather than numerical ratings, as the outcome measure elicited from consumers. Elsewhere in the world, the term "discrete choice experiment" has come to dominate in virtually all disciplines. Louviere and colleagues in environmental and health economics came to disavow the American terminology, claiming that it was misleading and disguised a fundamental difference between discrete choice experiments and traditional conjoint methods: discrete choice experiments have a testable theory of human decision-making underpinning them, whilst conjoint methods are simply a way of decomposing the value of a good using statistical designs applied to numerical ratings, with no psychological theory to explain what the rating-scale numbers mean.

Designing a choice model

Designing a choice model or discrete choice experiment (DCE) generally involves the following steps:
  1. Identifying the good or service to be valued;
  2. Deciding on what attributes and levels fully describe the good or service;
  3. Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program;
  4. Constructing the survey, replacing the design codes with the relevant attribute levels;
  5. Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys;
  6. Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory.

Identifying the good or service to be valued

This is often the easiest task. A good or service – for instance a mobile phone – is typically described by a number of attributes: phones are often described by shape, size, memory, brand, etc. The attributes to be varied in the DCE must be all those that are of interest to respondents; omitting key attributes typically causes respondents to make inferences about those missing from the DCE, leading to omitted-variable problems. The levels must typically include all those currently available, and are often expanded to include those possible in future – this is particularly useful in guiding product development.

Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program

A strength of DCEs and conjoint analyses is that they typically present a subset of the full factorial. For example, a phone with two brands, three shapes, three sizes and four amounts of memory has 2 × 3 × 3 × 4 = 72 possible configurations. This is the full factorial, and in most cases it is too large to administer to respondents. Subsets of the full factorial can be produced in a variety of ways, but in general they share the following aim: to enable estimation of a certain limited number of parameters describing the good – main effects, two-way interactions, etc. This is typically achieved by deliberately confounding higher-order interactions with lower-order interactions. For example, two-way and three-way interactions may be confounded with main effects. This has important consequences for analysis.
Researchers have therefore repeatedly been warned that design involves critical decisions about whether two-way and higher-order interactions are likely to be non-zero; a mistake at the design stage effectively invalidates the results, since the assumption that higher-order interactions are zero cannot be tested.
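The combinatorics of the full factorial are easy to verify directly: it is simply the Cartesian product of the attribute levels. A minimal sketch in Python, using illustrative attribute names and levels for the phone example above:

```python
from itertools import product

# Illustrative attributes and levels for a mobile phone
attributes = {
    "brand": ["A", "B"],                          # 2 levels
    "shape": ["bar", "slide", "flip"],            # 3 levels
    "size": ["small", "medium", "large"],         # 3 levels
    "memory": ["16GB", "32GB", "64GB", "128GB"],  # 4 levels
}

# The full factorial is the Cartesian product of all attribute levels
full_factorial = list(product(*attributes.values()))
print(len(full_factorial))  # 2 x 3 x 3 x 4 = 72 configurations
```

A fractional factorial design selects a structured subset of these 72 rows from which the parameters of interest remain estimable.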
Designs are available from catalogues and statistical programs. Traditionally they had the property of orthogonality, where all attribute levels can be estimated independently of each other. This ensures zero collinearity and can be illustrated with the following example.
Imagine a car dealership that sells both luxury cars and used low-end vehicles. Using the utility-maximisation principle and assuming an MNL model, we hypothesise that the decision to buy a car from this dealership is the sum of the individual contributions of attributes such as country of origin, marque and performance to total utility.
Using multinomial regression on the sales data, however, will not tell us what we want to know. The reason is that much of the data is collinear: cars at this dealership are either high-performance European luxury marques or low-performance used low-end vehicles.
There is not enough information, nor will there ever be, to tell us whether people are buying cars because they are European, because they are BMWs, or because they are high-performance. In RP data these three attributes always co-occur and, in this case, are perfectly correlated: every BMW is made in Germany and is of high performance. The three attributes – origin, marque and performance – are said to be collinear, or non-orthogonal. This is a fundamental reason why RP data are often unsuitable and why SP data are required: only under experimental conditions, via SP data, can such attributes be varied independently and their effects decomposed.
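The identification problem can be seen numerically: when origin, marque and performance always co-occur in the sales data, the corresponding columns of the design matrix coincide and the matrix is rank-deficient, so the separate effects cannot be estimated. A minimal sketch with made-up data:

```python
import numpy as np

# Each row is a car in the RP sales data; the three dummy columns code
# "European", "BMW" and "high performance". Because every BMW at this
# dealership is European and high performance, the columns are identical.
X = np.array([
    [1, 1, 1],   # luxury car
    [1, 1, 1],   # luxury car
    [0, 0, 0],   # used low-end vehicle
    [0, 0, 0],   # used low-end vehicle
])

# Rank 1 instead of 3: there is only one independent direction of
# variation, so the three effects are perfectly confounded.
print(np.linalg.matrix_rank(X))  # 1
```

An orthogonal SP design would instead vary the three columns independently, giving a full-rank design matrix.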
An experimental design in a Choice Experiment is a strict scheme for controlling and presenting hypothetical scenarios, or choice sets to respondents. For the same experiment, different designs could be used, each with different properties. The best design depends on the objectives of the exercise.
It is the experimental design that drives the experiment and the ultimate capabilities of the model. Many very efficient designs exist in the public domain that allow near optimal experiments to be performed.
For example, the Latin square 16^17 design allows the estimation of all main effects of a product that could have up to 16^17 configurations. Furthermore, this could be achieved within a sample frame of only around 256 respondents.
Below is an example of a much smaller design, a 3^4 main effects design:
0000
0112
0221
1011
1120
1202
2022
2101
2210

This design would allow the estimation of main-effects utilities from all 81 (3^4) possible product configurations, assuming all higher-order interactions are zero. A sample of around 20 respondents could yield statistically significant main-effect estimates for all 81 possible product configurations.
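The design above is an orthogonal array: each level appears equally often in each column, and every ordered pair of levels appears equally often across every pair of columns, which is what allows the main effects to be estimated independently. A quick check in Python:

```python
from collections import Counter
from itertools import combinations

design = ["0000", "0112", "0221", "1011", "1120", "1202",
          "2022", "2101", "2210"]
rows = [[int(c) for c in r] for r in design]

# Level balance: each of the 3 levels appears 3 times in each column
for col in range(4):
    counts = Counter(row[col] for row in rows)
    assert all(n == 3 for n in counts.values())

# Pairwise orthogonality: each of the 9 ordered level pairs appears
# exactly once for every pair of columns (a strength-2 orthogonal array)
for a, b in combinations(range(4), 2):
    pairs = Counter((row[a], row[b]) for row in rows)
    assert len(pairs) == 9 and all(n == 1 for n in pairs.values())

print("design is balanced and pairwise orthogonal")
```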
Several other classes of experimental design are also in common use.
More recently, efficient designs have been produced. These typically minimise a function of the variances of the parameter estimates; a common criterion is D-efficiency. The aim of these designs is to reduce the sample size required to achieve statistical significance of the estimated utility parameters. Such designs have often incorporated Bayesian priors for the parameters, to further improve statistical precision. Highly efficient designs have become extremely popular, given the costs of recruiting larger numbers of respondents. However, key figures in the development of these designs have warned of possible limitations, most notably the following. Design efficiency is typically maximised when good A and good B are as different as possible: for instance, every attribute defining the phone differs across A and B. This forces the respondent to trade off across price, brand, size, memory, etc.; no attribute has the same level in both A and B. This may impose a cognitive burden on the respondent, leading him or her to use simplifying heuristics that do not reflect his or her true utility function. Recent empirical work has confirmed that respondents do indeed use different decision rules when answering a less efficient design than when answering a highly efficient one.
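The efficiency criterion can be sketched in its simplest (linear-model) form: for a design matrix X with N rows and K columns, D-efficiency is often expressed as det(X'X/N)^(1/K), which design algorithms seek to maximise. Real DCE design software evaluates an analogous criterion on the choice model's information matrix, often at Bayesian priors for the parameters; the version below is a deliberately simplified illustration:

```python
import numpy as np

def d_efficiency(X: np.ndarray) -> float:
    """det(X'X / N)^(1/K): higher values mean a smaller joint confidence
    region for the K parameters, i.e. a more efficient design."""
    n, k = X.shape
    info = X.T @ X / n          # (scaled) information matrix
    return np.linalg.det(info) ** (1.0 / k)

# Orthogonal +/-1 coding attains the maximum of 1.0 here ...
orthogonal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
# ... while correlated columns lose efficiency
correlated = np.array([[1, 1], [1, 1], [-1, -1], [-1, 1]], dtype=float)

print(d_efficiency(orthogonal))
print(d_efficiency(correlated))
```

The comparison shows why collinearity between attributes (as in the car-dealership example) inflates the variance of the estimates.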
It is worth reiterating, however, that small designs estimating main effects typically do so by deliberately confounding higher-order interactions with the main effects. This means that unless those interactions are in fact zero, the analyst will obtain biased estimates of the main effects; furthermore, he or she has no way of testing this, and no way of correcting for it in analysis. This emphasises the crucial role of design in DCEs.

Constructing the survey

Constructing the survey typically involves replacing the design codes (the numbers in the design matrix above) with the relevant attribute levels and assembling the resulting profiles into choice sets for presentation to respondents.
Traditionally, DCEs were administered via paper-and-pen methods. Increasingly, with the power of the web, internet surveys have become the norm. These have advantages in terms of cost, randomisation of respondents to different versions of the survey, and screening. An example of the latter is achieving gender balance: if too many males answer, further males can be screened out so that the numbers of male and female respondents match.

Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory

Analysing the data from a DCE requires the analyst to assume a particular type of decision rule – or, in economists' terms, a functional form of the utility equation. This is usually dictated by the design: if a main-effects design has been used, then two-way and higher-order interaction terms cannot be included in the model. Regression models are then typically estimated, often beginning with the conditional logit model – traditionally, although slightly misleadingly, referred to by choice modellers as the multinomial logistic regression (MNL) model. The MNL model converts the observed choice frequencies into utility estimates via the logistic function. The utility associated with every attribute level can be estimated, allowing the analyst to construct the total utility of any possible configuration. A DCE may also be used to estimate non-market environmental benefits and costs.
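Conditional logit estimation can be sketched as a maximum-likelihood problem: the probability of choosing an alternative is the softmax of the utilities in its choice set. The example below simulates and refits a model on synthetic data; the attribute names ("price", "quality"), true coefficients and sample size are all illustrative, and real analyses would normally use a dedicated package:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_sets, n_alts = 500, 3
beta_true = np.array([-1.0, 2.0])       # price, quality (illustrative)

# Attribute values for each alternative in each choice set
X = rng.uniform(0, 1, size=(n_sets, n_alts, 2))

# Simulate each choice from the logit (softmax) probabilities
util = X @ beta_true                                   # (n_sets, n_alts)
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choices = np.array([rng.choice(n_alts, p=p) for p in probs])

def neg_log_lik(beta):
    v = X @ beta
    log_p = v - logsumexp(v, axis=1, keepdims=True)    # log choice probabilities
    return -log_p[np.arange(n_sets), choices].sum()

beta_hat = minimize(neg_log_lik, np.zeros(2), method="BFGS").x
print(beta_hat)   # roughly [-1.0, 2.0]
```

In practice, purpose-built software (e.g. Biogeme, Apollo, or Stata's clogit) adds standard errors, model diagnostics and extensions such as mixed logit.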

Strengths

Yatchew and Griliches first proved that means and variances are confounded in limited dependent variable models. This limitation becomes acute in choice modelling: a large estimated beta from the MNL regression model, or any other choice model, can mean:
  1. Respondents place the item high up on the latent scale, or
  2. Respondents do not place the item high up on the scale BUT they are very certain of their preferences, consistently choosing the item over others presented alongside, or
  3. Some combination of 1 and 2.
This has significant implications for the interpretation of the output of a regression model. All statistical programs "solve" the mean–variance confound by setting the variance equal to a constant; every estimated beta coefficient is, in fact, an estimated beta multiplied by an estimated lambda. This tempts the analyst to ignore the problem. However, he or she must consider whether a set of large beta coefficients reflects strong preferences, consistency in choices, or some combination of the two. Dividing all estimates by that of one particular variable – typically the price variable – cancels the confounded lambda term from numerator and denominator. This solves the problem, with the added benefit that it gives economists the respondent's willingness to pay for each attribute level. However, the finding that results estimated in "utility space" do not match those estimated in "willingness to pay space" suggests that the confound problem is not solved by this "trick": variances may be attribute-specific or some other function of the variables. This is a subject of current research in the field.
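The ratio "trick" described above is simple to apply: dividing each attribute coefficient by the negative of the price coefficient cancels the common scale factor and yields willingness to pay in money units. A minimal sketch with illustrative coefficient names and values:

```python
# Illustrative estimated coefficients (each is really lambda * beta)
coefs = {"price_per_dollar": -0.8, "10yr_warranty": 1.6, "64GB_memory": 0.4}

# WTP for an attribute = -(beta_attribute / beta_price); the confounded
# scale factor lambda cancels from numerator and denominator
wtp = {k: -v / coefs["price_per_dollar"]
       for k, v in coefs.items() if k != "price_per_dollar"}
print(wtp)   # {'10yr_warranty': 2.0, '64GB_memory': 0.5}
```

Here a respondent would be willing to pay $2.00 for the 10-year warranty and $0.50 for the extra memory, regardless of the unidentified scale.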

Versus traditional ratings-based conjoint methods

Ratings questions suffer from well-documented problems that do not occur with choice models; most fundamentally, as noted above, there is no psychological theory explaining what the rating-scale numbers mean.

Ranking

Rankings do force the individual to indicate relative preferences for the items of interest, so the trade-offs between them can, as in a DCE, typically be estimated. However, ranking models must test whether the same utility function is being estimated at every ranking depth: for example, the same estimates must result from the bottom-rank data as from the top-rank data.
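One common way to analyse rankings is to "explode" each ranking into a sequence of implied discrete choices – the best of all items, then the best of the remainder, and so on – each of which can be modelled like a DCE choice; estimating the model separately at each depth provides the consistency test described above. A minimal sketch of the explosion step:

```python
def explode_ranking(ranking):
    """Convert a ranking (best first) into the implied sequence of
    (chosen item, available set) discrete choices."""
    return [(ranking[i], ranking[i:]) for i in range(len(ranking) - 1)]

choices = explode_ranking(["A", "C", "B", "D"])
print(choices)
# [('A', ['A', 'C', 'B', 'D']), ('C', ['C', 'B', 'D']), ('B', ['B', 'D'])]
```

Each exploded choice can then be fed to a conditional logit model; comparing the estimates from the first-rank choices with those from deeper ranks tests whether one utility function describes the whole ranking.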

Best–worst scaling

Best–worst scaling is a well-regarded alternative to ratings and rankings. It asks people to choose their most and least preferred options from a range of alternatives. By subtracting or integrating across the choice probabilities, utility scores for each alternative can be estimated on an interval or ratio scale, for individuals and/or groups. Various psychological models may be used by individuals to produce best–worst data, including the MaxDiff model.
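The simplest analysis of best–worst data counts, for each item, how often it was chosen as best minus how often it was chosen as worst; under common models these scores approximate the items' positions on the underlying latent scale. A minimal sketch with made-up responses:

```python
from collections import Counter

# Each response is a (best, worst) pair chosen from a shown subset
responses = [("A", "C"), ("A", "B"), ("B", "C"), ("A", "C"), ("B", "C")]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
items = set(best) | set(worst)

# Best-minus-worst score per item, highest (most preferred) first
scores = {item: best[item] - worst[item] for item in items}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# [('A', 3), ('B', 1), ('C', -4)]
```

More sophisticated analyses fit a MaxDiff or related choice model to the same data rather than relying on raw counts.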

Uses

Choice modelling is particularly useful for: