Single-Choice, Repeated-Choice, and Best-Worst Elicitation Formats: Do Results Differ and by How Much?

This paper presents what we believe to be the most comprehensive set of criteria to date for comparing multinomial discrete-choice experiment elicitation formats. We administer a choice experiment on ecosystem-service valuation to three independent samples using single-choice, repeated-choice, and best-worst elicitation. We test whether results differ in parameter estimates, scale factors, preference heterogeneity, status-quo/action bias, attribute non-attendance, and the magnitude and precision of welfare measures. Overall, we find very limited evidence of differences in attribute parameter estimates, scale factors, and attribute increment values across elicitation treatments. However, we find significant differences in status-quo/action bias: repeated-choice yields greater proportions of “Yes” votes and, consequently, higher program-level welfare estimates. We also find that single-choice yields substantially less precise welfare estimates. Finally, we find significant differences in attribute non-attendance behavior across elicitation formats, although class shares show little consistency even within a given elicitation treatment.


Issue Date:
2015-11
Publication Type:
Working or Discussion Paper
PURL Identifier:
http://purl.umn.edu/212479
Total Pages:
44
JEL Codes:
Q51; Q57
Series Statement:
Working Paper
2015-3




Record created 2017-04-01, last modified 2017-08-28
